
PARAMETRIC APPROXIMATION OF DATA USING

ORTHOGONAL DISTANCE REGRESSION SPLINES

by

SCOTT REAGAN FRANKLIN, B.S., B.A.

A THESIS

IN

MATHEMATICS

Submitted to the Graduate Faculty of Texas Tech University in

Partial Fulfillment of the Requirements for

the Degree of

MASTER OF SCIENCE

Approved

August, 2000


ACKNOWLEDGMENTS

I would like to offer my sincerest gratitude to all those who have participated and supported me during the research and writing of this paper. Specifically, I would like to thank my wife, Lori, and the newest addition to our family, my daughter, Emily Diane. Also, I extend my appreciation to Joel McElrath for providing one of the most useful tools in my research, the notebook computer on which it was almost entirely produced. In addition, I would like to offer my appreciation to Carrie Mahood for the time she dedicated in helping me finalize this paper. Lastly, I thank my advisor, Dr. Phil Smith, who has helped me to find a topic of great interest and led me through much of the necessary research and programming, providing many useful tools such as the LSGRG2 code mentioned in the paper and access to the High Performance Computing Center.

CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

1 INTRODUCTION TO DATA APPROXIMATION USING B-SPLINES
1.1 Polynomial Interpolation: Divided Differences and Newton Form
1.2 Definition and Properties of B-Splines
1.2.1 B-splines have small support
1.2.2 B-splines are normalized
1.2.3 B-splines are positive on their support
1.2.4 B-splines are linearly independent
1.3 Linear Independence of B-Splines using Peano Kernels
1.3.1 Case 1: First-Order Splines
1.3.2 Case 2: Second-Order Splines
1.3.3 Case 3: General Order Splines
1.4 Least Squares Approximation using B-Splines

2 UNIVARIATE PARAMETRIC APPROXIMATION USING ODR SPLINES
2.1 Statement of the Problem
2.2 Description of Numerical Methods that Solve the Problem
2.2.1 A Short Explanation of LSGRG2 [5]
2.2.2 Various Solution Algorithms using LSGRG2 and Least-Squares Approximation of B-Spline Coefficients
2.2.2.1 Method 1
2.2.2.2 Method 2
2.2.2.3 Method 3
2.2.2.4 Methods 4 and 5
2.3 Numerical Examples: Comparing Methods
2.3.1 Example 1, Simple Examples with Data Shuffling
2.3.2 Example 2, Vertical Data
2.3.3 Example 3, Helix Data with Noise
2.3.4 Example 4, Random Data with Varying Initial Guesses

3 AN EXTENSION TO THE BIVARIATE APPROXIMATION PROBLEM
3.1 Statement of the Problem
3.2 Description of Method
3.3 Numerical Examples
3.3.1 Example 1, A Paraboloid
3.3.2 Example 2, Ring Torus
3.3.3 Example 3, Spindle Torus

4 THEORETICAL AND COMPUTING EXTENSION OF THE PARAMETRIC APPROXIMATION PROBLEM
4.1 Minimal Order of Parametric Approximation of Data Using Polynomials
4.2 Possibilities for High Performance and Parallel Computing

BIBLIOGRAPHY

LIST OF TABLES

2.1 Unshuffled Data in Example 1
2.2 Example 2, two-dimensional Vertical Data
2.3 Example 2, three-dimensional Vertical Data

LIST OF FIGURES

1.1 Case 1, f_j on sequence t for j = 1, ..., n
1.2 Case 2, Construction of f_j in three possible ways
2.1 Data's Distance from a Curve
2.2 3-D Butterfly Curve
2.3 Example 1 with Unshuffled Data, Methods 4 and 5
2.4 Example 1 with Shuffled Data, All Methods
2.5 Example 2, two-dimensional Vertical Data
2.6 Example 2, Seemingly Unbounded Spline Coefficients
2.7 Example 2, Unbounded Coefficients in 3-D Case
2.8 Example 2, three-dimensional Vertical Data
2.9 Basic Helix
2.10 Example 3, Noisy Helix Data Approximated by Method 5
2.11 Example 4, Random 2-D Data with Varying Initial Guesses
2.12 Varying Initial Guesses on Shuffled Data in Example 1
3.1 Example 5, Paraboloid and Approximated Paraboloid
3.2 Example 5, Progression toward Approximation of Paraboloid
3.3 Example 5, Ring Torus and Approximated Ring Torus
3.4 Example 5, Progression toward Approximation of Ring Torus
3.5 An Outside and Inside View of the Spindle Torus
3.6 Example 5, An Approximated Spindle Torus from Outside and Inside

CHAPTER 1

INTRODUCTION TO DATA APPROXIMATION USING B-SPLINES

In qualitative analysis of a mathematical or experimental nature, the investigation

and examination of data is inevitable. The goal of such an investigation is to conclude

something about the behavior of that data so that ultimately predictions can be made

concerning various cases of a problem or other similar problems. In the first section

of this chapter, we introduce some basic facts about interpolating such data. That

is, we find a function that generates the exact data. We also introduce the notion of

divided differences, both of which will lead us to develop B-splines, our central tool of

data approximation in this paper.

1.1 Polynomial Interpolation: Divided Differences and Newton Form

One of the simplest tools for data approximation is the polynomial, due to the ease with which polynomials are evaluated, differentiated, and integrated. We will denote the set of all polynomials of order n as \mathbb{P}_n. An element of this set is said to have degree less than n and is written in the form

p(x) = \sum_{j=1}^{n} a_j x^{j-1} = a_1 + a_2 x + \cdots + a_n x^{n-1}.

Theorem 1.1. Let \tau_1, \tau_2, \ldots, \tau_n be distinct points and g(\tau_1), g(\tau_2), \ldots, g(\tau_n) be given data. Then there exists exactly one polynomial p \in \mathbb{P}_n such that p(\tau_i) = g(\tau_i) for i = 1, \ldots, n. The polynomial can be written in Lagrange form

p(x) = \sum_{i=1}^{n} g(\tau_i)\,\ell_i(x)    (1.1)

with \ell_i(x) = \prod_{j \ne i} \frac{x - \tau_j}{\tau_i - \tau_j}, for i = 1, \ldots, n.

Although the proof is omitted, this can be easily proved using the Fundamental Theorem of Algebra. This Lagrange form is named for the Lagrange polynomial

\ell_i(x) = \prod_{\substack{j=1 \\ j \ne i}}^{n} \frac{x - \tau_j}{\tau_i - \tau_j},    (1.2)

which has the special property

\ell_i(\tau_j) = \delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j. \end{cases}

Although the form is simple to write, the evaluation of the Lagrange form of the interpolating polynomial is extremely expensive. Compared to other methods of calculation, it is much less efficient.

An alternate method of calculating this interpolating polynomial, called Newton's

form, makes use of a tool known as the divided difference.

Definition 1.1. The k-th divided difference of a function g at the points \tau_i, \ldots, \tau_{i+k} is the leading coefficient (i.e., the coefficient of x^k) of the polynomial of order k+1 which agrees with g at the points \tau_i, \ldots, \tau_{i+k} (i.e., the interpolating polynomial). We will denote this by

[\tau_i, \ldots, \tau_{i+k}]\,g.

This definition requires that we also address the case where certain \tau's are coincident. To address this we introduce the following definition.

Definition 1.2. Given the sequence \tau = \{\tau_i\}_{i=1}^{n} of points not necessarily distinct, we say that the function p agrees with g at \tau provided that, for every point s which occurs m times in the sequence \tau_1, \ldots, \tau_n, p and g agree m-fold at s. That is,

p^{(i-1)}(s) = g^{(i-1)}(s) for i = 1, \ldots, m.

These definitions immediately yield the following property: If p_i \in \mathbb{P}_i agrees with g at \tau_1, \ldots, \tau_i for i = k and i = k+1, then

p_{k+1}(x) = p_k(x) + (x - \tau_1) \cdots (x - \tau_k)[\tau_1, \ldots, \tau_{k+1}]g.

Or, written in another form, we have

p_{k+1}(x) - p_k(x) = (x - \tau_1) \cdots (x - \tau_k)[\tau_1, \ldots, \tau_{k+1}]g.

From this we obtain Newton's form of the interpolating polynomial:

p_n(x) = p_1(x) + (p_2(x) - p_1(x)) + \cdots + (p_n(x) - p_{n-1}(x))
       = [\tau_1]g + (x - \tau_1)[\tau_1, \tau_2]g + \cdots + (x - \tau_1) \cdots (x - \tau_{n-1})[\tau_1, \ldots, \tau_n]g,

or, written in summation form,

p_n(x) = \sum_{i=1}^{n} (x - \tau_1) \cdots (x - \tau_{i-1})[\tau_1, \ldots, \tau_i]g.    (1.3)
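To make the divided-difference table concrete, consider the following minimal C sketch (hypothetical helper names for illustration; this is not code from [1]). It overwrites the data values with the Newton coefficients [\tau_1]g, [\tau_1,\tau_2]g, \ldots, [\tau_1,\ldots,\tau_n]g and then evaluates (1.3) by nested multiplication:

    #include <stdio.h>

    /* Overwrite g[0..n-1] (values g(tau_i)) in place with the Newton
       coefficients, assuming the tau_i are distinct. */
    void divided_differences(int n, const double *tau, double *g)
    {
        for (int k = 1; k < n; k++)           /* k-th column of the table */
            for (int i = n - 1; i >= k; i--)  /* work bottom-up in place  */
                g[i] = (g[i] - g[i - 1]) / (tau[i] - tau[i - k]);
    }

    /* Evaluate the Newton form (1.3) by nested multiplication. */
    double newton_eval(int n, const double *tau, const double *coef, double x)
    {
        double p = coef[n - 1];
        for (int i = n - 2; i >= 0; i--)
            p = coef[i] + (x - tau[i]) * p;
        return p;
    }

    int main(void)
    {
        double tau[] = {0.0, 1.0, 2.0, 3.0};
        double g[]   = {1.0, 2.0, 5.0, 10.0};   /* samples of g(x) = x^2 + 1 */
        divided_differences(4, tau, g);
        printf("p(1.5) = %g\n", newton_eval(4, tau, g, 1.5));   /* prints 3.25 */
        return 0;
    }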

1.2 Definition and Properties of B-Splines

In this section we develop the notion of k-th order B-splines using divided differences and introduce several important properties of these resulting functions.

Definition 1.3. Let t = \{t_i\}_{i=1}^{n+k} be a nondecreasing sequence (where the t_i's are not necessarily distinct). We assume that t_i < t_{i+k}. The i-th (normalized) B-spline of order k for the knot sequence t is denoted by B_{i,k,t} and is defined by the rule

B_{i,k,t}(x) = (t_{i+k} - t_i)\,[t_i, \ldots, t_{i+k}]_t\,(t - x)_+^{k-1}, \forall x \in \mathbb{R}.    (1.1)

In this formula, [t_i, \ldots, t_{i+k}]_t (t - x)_+^{k-1} is the k-th divided difference over the variable t of the truncated power function at the knots t_i, \ldots, t_{i+k}. As we have seen, this divided difference is the leading coefficient (i.e., the coefficient of t^k) of the polynomial of order k+1 which agrees with the truncated power function at t = t_i, \ldots, t_{i+k}. Also recall that the truncated power function is defined as follows:

(t - x)_+^m = \begin{cases} (t - x)^m, & t > x, \\ 0, & t \le x. \end{cases}    (1.2)

The following properties, given mostly without proof, illustrate some "nice" behavior

that these B-spline functions exhibit.

1.2.1 B-splines have small support

One immediate result, quite easily proven, is that B_i (= B_{i,k,t}) has small support. That is,

B_i(x) = 0 for x \notin [t_i, t_{i+k}].

As a result, only k B-splines of order k for a knot sequence t may be nonzero on an interval (t_j, t_{j+1}).

1.2.2 B-splines are normalized

We also have the special property that

\sum_i B_i(x) = \sum_{i=r+1-k}^{s-1} B_i(x) = 1, x \in (t_r, t_s),

due to the normalizing coefficient (t_{i+k} - t_i) in our definition. Because t_i < t_{i+k}, there must be at least one open interval in (t_i, t_{i+k}); let (t_r, t_s) be such an interval or union of adjacent intervals.

1.2.3 B-splines are positive on their support

It can also be shown that B_i is positive on its support. That is,

B_i(x) > 0 \forall x \in (t_i, t_{i+k}).

1.2.4 B-splines are linearly independent

In the following section, we will prove the fact that B-splines of order k are linearly

independent. Based on this fact, we introduce the following definition:

Definition 1.4. A spline function of order k with knot sequence t is any linear combination of B-splines of order k for the knot sequence t. The collection of all such functions will be denoted by S_{k,t}. Any function s \in S_{k,t} can be written in the form

s(x) = \sum_{i=1}^{n} a_i B_i(x).

In other words, the B-splines form a basis for S_{k,t}, hence the name Basis-splines.
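Although we have defined B-splines through divided differences, the standard computational route is the equivalent Cox-de Boor recurrence: B_{i,1} is the characteristic function of [t_i, t_{i+1}), and

B_{i,k}(x) = \frac{x - t_i}{t_{i+k-1} - t_i}\,B_{i,k-1}(x) + \frac{t_{i+k} - x}{t_{i+k} - t_{i+1}}\,B_{i+1,k-1}(x),

with 0/0 terms taken as 0. A naive C sketch of this (zero-based indices; illustrative only, not the optimized routine of [1]) is:

    /* Evaluate B_{i,k,t}(x) by the Cox-de Boor recurrence. */
    double bspline(int i, int k, const double *t, double x)
    {
        if (k == 1)
            return (t[i] <= x && x < t[i + 1]) ? 1.0 : 0.0;
        double left = 0.0, right = 0.0;
        if (t[i + k - 1] > t[i])
            left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline(i, k - 1, t, x);
        if (t[i + k] > t[i + 1])
            right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline(i + 1, k - 1, t, x);
        return left + right;
    }

    /* A spline s in S_{k,t}: s(x) = sum_i a_i B_{i,k,t}(x). */
    double spline_eval(int n, int k, const double *t, const double *a, double x)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * bspline(i, k, t, x);
        return s;
    }

By the small-support property above, at most k of the terms in spline_eval are nonzero for any given x; a production code would locate the knot interval containing x and sum only those terms.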

1.3 Linear Independence of B-Splines using Peano Kernels

Let us begin with the following theorem:

Theorem 1.2. B-splines of order k over some knot sequence t are linearly independent. That is,

\sum_{i=1}^{n} \alpha_i B_i(t) = 0 \ \forall t \in \mathbb{R} if and only if \alpha = [\alpha_1, \ldots, \alpha_n]^T = 0.

The proof is an immediate result of the following Lemma:

Lemma 1.1. There exists a sequence of linear functionals, \{\lambda_j\}_{j=1}^{n}, such that

\lambda_j B_i = \delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j, \end{cases}   1 \le i, j \le n.

If such functionals exist, then for any \alpha such that \sum_{i=1}^{n} \alpha_i B_i(t) = 0 we have

\lambda_j \left( \sum_{i=1}^{n} \alpha_i B_i(t) \right) = 0, for j = 1, 2, \ldots, n.    (1.1)

Because of the linearity of the functionals we can rewrite (1.1) as

\sum_{i=1}^{n} \alpha_i \lambda_j B_i = \alpha_j = 0, for j = 1, \ldots, n,

thus showing that B-splines are linearly independent.

Proof (of Lemma 1.1). In order to fully address this question let us begin by introducing the concept of the Peano kernels of a linear functional.

Definition 1.5. Let us consider a general linear functional of the form

\phi(f) = \sum_{i=0}^{N} \left[ \int_a^b \alpha_i(x) f^{(i)}(x)\,dx + \sum_j \beta_{ij} f^{(i)}(x_{ij}) \right],    (1.2)

where \phi(af + bg) = a\phi(f) + b\phi(g), i.e., \phi is linear. (Note that this form is only necessary for the sake of the behavior of the functional in the proof of the following theorem.) The Peano kernels of the functional \phi are defined as

K_m(t) = \frac{1}{m!}\,\phi_x\left[ (x - t)_+^m \right] for m \ge N,    (1.3)

where, again, (x - t)_+^m is the truncated power function as defined in (1.2). By \phi_x we mean the functional \phi taken over the independent variable x, as opposed to t.

We say that a functional \phi annihilates a space \Pi if \phi(f) = 0 for all f \in \Pi. The main result from this consideration of the Peano kernels of a functional is given by the following theorem:

Theorem 1.3. If the functional given by (1.2) annihilates the space of all polynomials of degree less than or equal to m, denoted by \mathbb{P}_{m+1}, then for all f \in C^{m+1}[a, b],

\phi(f) = \int_a^b K_m(t)\,f^{(m+1)}(t)\,dt.    (1.4)

Proof. Recall Taylor's theorem with integral remainder: If f \in C^{m+1}[a, b] and c \in [a, b] is fixed, then

f(x) = \sum_{k=0}^{m} \frac{1}{k!} f^{(k)}(c)(x - c)^k + R_m(x),    (1.5)

where x \in [a, b], and

R_m(x) = \frac{1}{m!} \int_c^x f^{(m+1)}(t)(x - t)^m\,dt.    (1.6)

We can rewrite (1.6) using the truncated power function:

R_m(x) = \frac{1}{m!} \int_a^b f^{(m+1)}(t)(x - t)_+^m\,dt.    (1.7)

Now, because \phi annihilates \mathbb{P}_{m+1}, \phi(f) = \phi(R_m(x)). That is,

\phi(f) = \phi_x\left( \frac{1}{m!} \int_a^b f^{(m+1)}(t)(x - t)_+^m\,dt \right).

We can now pass the functional \phi under the integral sign, which is justified by certain theorems in calculus and the form (1.2) that we adopted for a general linear functional. At this point we have our result:

\phi(f) = \int_a^b \frac{1}{m!}\,\phi_x\left[ (x - t)_+^m \right] f^{(m+1)}(t)\,dt.

Or, to rewrite using (1.3),

\phi(f) = \int_a^b K_m(t)\,f^{(m+1)}(t)\,dt.  □

Consider the functionals \phi_i f = [t_i, \ldots, t_{i+k}]f that are related to our earlier consideration of some knot sequence t and a divided difference of order k. Clearly the sequence of functionals \phi = \{\phi_i\}_{i=1}^{n} is linear by properties of the divided difference. More importantly, we can note that each \phi_i annihilates \mathbb{P}_k. Hence, by Theorem 1.3, given f \in C^k[a, b], then

\phi_i f = \int_a^b \frac{1}{(k-1)!}\,[t_i, \ldots, t_{i+k}]_x (x - t)_+^{k-1}\,f^{(k)}(t)\,dt.

This can also be rewritten in the following form by simple algebraic manipulation:

\phi_i f = \frac{1}{(k-1)!\,(t_{i+k} - t_i)} \int_a^b (t_{i+k} - t_i)\,[t_i, \ldots, t_{i+k}]_x (x - t)_+^{k-1}\,f^{(k)}(t)\,dt.

Therefore, we have the following relationship between the Peano kernels and the B-splines:

\phi_i f = \frac{1}{(k-1)!\,(t_{i+k} - t_i)} \int_a^b B_i(t)\,f^{(k)}(t)\,dt.

At this point we have developed the following claim: For all f \in C^k[a, b],

[t_i, \ldots, t_{i+k}]f = \frac{1}{(k-1)!\,(t_{i+k} - t_i)} \int_a^b B_i(t)\,f^{(k)}(t)\,dt.
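As a small worked instance of this relationship (an illustrative example), take k = 2 with knots t_1 = 0, t_2 = 1, t_3 = 2. Then
\[
\phi_1 f = [0,1,2]f = \tfrac{1}{2}\bigl(f(0) - 2f(1) + f(2)\bigr),
\]
which annihilates \mathbb{P}_2, and its Peano kernel (with m = k - 1 = 1) is
\[
K_1(t) = \phi_x\bigl[(x - t)_+\bigr]
       = \tfrac{1}{2}\bigl((0-t)_+ - 2(1-t)_+ + (2-t)_+\bigr)
       = \begin{cases} t/2, & 0 \le t \le 1, \\ (2-t)/2, & 1 \le t \le 2, \\ 0, & \text{otherwise}. \end{cases}
\]
This is exactly B_1(t) / \bigl((k-1)!\,(t_3 - t_1)\bigr) = B_1(t)/2, the order-two "hat" B-spline scaled by the factor appearing in the claim above.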

Now let us consider a second sequence of functionals, \lambda = \{\lambda_j\}_{j=1}^{n}, such that, for j = 1, \ldots, n,

\lambda_j g = \frac{1}{(k-1)!\,(t_{j+k} - t_j)} \int_a^b g(t)\,f_j^{(k)}(t)\,dt.    (1.9)

The functionals \phi and \lambda also possess the following relationship:

\phi_i f_j = \lambda_j B_i.    (1.10)

It is now our goal to judiciously construct each f_j so that \lambda_j B_i = \delta_{ij}. However, due to Equation (1.10) we merely need to construct f_j so that \phi_i f_j = \delta_{ij}, or

[t_i, \ldots, t_{i+k}]f_j = \delta_{ij}.

It is interesting to note that this is a much simpler task due to the fact that the value of the divided difference only depends on the values of f_j over the sequence t. At this point we will consider two particular cases, k = 1 and k = 2, where we hope to detect a pattern that will allow us to construct a general f_j for any value of k.

Figure 1.1: Case 1, f_j on the sequence t for j = 1, ..., n.

Another important note at this point is that because each f_j need only be defined on the sequence t, the sequence \lambda is not unique. In fact, in our current form each \lambda_j is specified in integral form, but we shall see that they can also be expressed in the summation form that appears in [1].

1.3.1 Case 1: First-Order Splines

First let us consider the case k = 1 (i.e., \phi_i f_j = [t_i, t_{i+1}]f_j). We begin constructing f_j by defining it only on the nodes \{\ldots, t_{j-1}, t_j, t_{j+1}, t_{j+2}, \ldots\} for j = 1, \ldots, n as in Figure 1.1. If we define f_j in this form we clearly have

[t_i, t_{i+1}]f_j = \begin{cases} 0, & i \ne j, \\ c, & i = j. \end{cases}

Let us now find an a for which we will have c = 1. We consider the case i = j. To accomplish this we must recall the recurrence relation of the divided difference given by

[t_i, \ldots, t_{i+k}]g = \frac{[t_i, \ldots, t_{r-1}, t_{r+1}, \ldots, t_{i+k}]g - [t_i, \ldots, t_{s-1}, t_{s+1}, \ldots, t_{i+k}]g}{t_s - t_r},    (1.11)

where t_r and t_s are any two distinct points in the sequence t. It follows that

[t_i, t_{i+1}]f_i = \frac{f_i(t_{i+1}) - f_i(t_i)}{t_{i+1} - t_i} = \frac{a}{t_{i+1} - t_i},

since by the definition of B-splines we require that t_i < t_{i+1} (or in general t_i < t_{i+k}). We set this equal to 1 and immediately have that a = t_{i+1} - t_i.

Therefore, any function f_j that takes on the values

f_j(t_s) = \begin{cases} 0, & s \le j, \\ t_{j+1} - t_j, & s > j \end{cases}

will produce the sequence of functionals \{\lambda_j\}_{j=1}^{n} such that \lambda_j B_i = \delta_{ij} when k = 1.

1.3.2 Case 2: Second-Order Splines

Now let us consider the second-order case: k = 2. Our functional is then defined by \phi_i f_j = [t_i, t_{i+1}, t_{i+2}]f_j. It is very important to note that the only restriction on the knot sequence t is t_i < t_{i+k}. This implies that either the first two or the last two knots may be coincident. In other words, we may have t_i < t_{i+1} or t_{i+1} < t_{i+2}, or both.

Let us consider the possible constructions of f_j as illustrated in Figure 1.2. Clearly in each case we have the following:

[t_i, t_{i+1}, t_{i+2}]f_j = \begin{cases} 0, & i \ne j, \\ c\ (\ne 0), & i = j. \end{cases}

In the first two cases, we must find a such that [t_i, t_{i+1}, t_{i+2}]f_i = c = 1. In the third case we will see something different occur.

Case i: In this case we have t_i < t_{i+1} < t_{i+2} and obtain the following results:

[t_i, t_{i+1}, t_{i+2}]f_i = \frac{[t_{i+1}, t_{i+2}]f_i - [t_i, t_{i+1}]f_i}{t_{i+2} - t_i}
                           = \frac{[t_{i+1}, t_{i+2}]f_i}{t_{i+2} - t_i}
                           = \frac{f_i(t_{i+2}) - f_i(t_{i+1})}{(t_{i+2} - t_i)(t_{i+2} - t_{i+1})}
                           = \frac{f_i(t_{i+2})}{(t_{i+2} - t_i)(t_{i+2} - t_{i+1})}.

This implies that a = (t_{i+2} - t_i)(t_{i+2} - t_{i+1}). We let

f_j(t) = \begin{cases} 0, & t \le \tau_j, \\ (t_{j+2} - t_j)(t - t_{j+1}), & t \ge \tau_j', \end{cases}

where \tau_j < \tau_j' and \tau_j, \tau_j' \in (t_{j+1}, t_{j+2}). Under this particular construction of f_j,

\phi_i f_j = \lambda_j B_i = \delta_{ij}.

Figure 1.2: Case 2, Construction of f_j in three possible ways.

Case ii: In this case we have t_i = t_{i+1} < t_{i+2} and obtain the following results:

[t_i, t_{i+1}, t_{i+2}]f_i = \frac{[t_{i+1}, t_{i+2}]f_i - [t_i, t_{i+1}]f_i}{t_{i+2} - t_i}
                           = \frac{[t_{i+1}, t_{i+2}]f_i - [t_i, t_i]f_i}{t_{i+2} - t_i}
                           = \frac{[t_{i+1}, t_{i+2}]f_i - f_i'(t_i)}{t_{i+2} - t_i}.

However, f_i'(t_i) = 0 yields

[t_i, t_{i+1}, t_{i+2}]f_i = \frac{f_i(t_{i+2})}{(t_{i+2} - t_i)(t_{i+2} - t_{i+1})}.

As before, a = (t_{i+2} - t_i)(t_{i+2} - t_{i+1}). Clearly we can use the same construction of f_j, since f_j'(t_j) = 0 for all j. In other words, let

f_j(t) = \begin{cases} 0, & t \le \tau_j, \\ (t_{j+2} - t_j)(t - t_{j+1}), & t \ge \tau_j', \end{cases}

where \tau_j < \tau_j' and \tau_j, \tau_j' \in (t_{j+1}, t_{j+2}). For this we still have \phi_i f_j = \lambda_j B_i = \delta_{ij}.

Case iii: In this case we have t_i < t_{i+1} = t_{i+2} and obtain the following results:

[t_i, t_{i+1}, t_{i+2}]f_i = \frac{[t_{i+1}, t_{i+2}]f_i - [t_i, t_{i+1}]f_i}{t_{i+2} - t_i}
                           = \frac{[t_{i+2}, t_{i+2}]f_i}{t_{i+2} - t_i}
                           = \frac{f_i'(t_{i+2})}{t_{i+2} - t_i}.

However, let f_i'(t_{i+2}) = t_{i+2} - t_i and we have [t_i, t_{i+1}, t_{i+2}]f_i = 1. As before, the construction of f_j in the form

f_j(t) = \begin{cases} 0, & t \le \tau_j, \\ (t_{j+2} - t_j)(t - t_{j+1}), & t \ge \tau_j' \end{cases}

gives us exactly this property. So for all three cases we use the same construction of f_j and have the result \phi_i f_j = \lambda_j B_i = \delta_{ij}.

1.3.3 Case 3: General Order Splines

Following the pattern set in the previous cases, let us construct f_j such that [t_i, \ldots, t_{i+k}]f_j = \delta_{ij}. Consider the following function:

f_j(t) = r(t)\,(t_{j+k} - t_j)(t - t_{j+1}) \cdots (t - t_{j+k-1}),

where

r(t) = \begin{cases} 0, & t \le t_j, \\ 1, & t \ge t_{j+k}. \end{cases}    (1.13)

Since we are concerned only with the values at particular nodes, notice that f_j(t_s) = 0 for s = j+1, \ldots, j+k-1 whether r = 0 or r = 1. From this we obtain the result: if i = j,

[t_i, \ldots, t_{i+k}]f_i = \frac{[t_{i+1}, \ldots, t_{i+k}]f_i - [t_i, \ldots, t_{i+k-1}]f_i}{t_{i+k} - t_i} = \frac{(t_{i+k} - t_i) - 0}{t_{i+k} - t_i} = 1.    (1.14)

This is true by the definition of divided differences: [t_{i+1}, \ldots, t_{i+k}]f_i is the leading coefficient of the interpolating polynomial, which is exactly what f_i is for t = t_{i+1}, \ldots, t_{i+k}. On the other hand, for t = t_i, \ldots, t_{i+k-1}, we have f_i = 0. Also, it is clear that [t_i, \ldots, t_{i+k}]f_j = 0 if j \ne i. At this point we have proven that we can construct \lambda_j such that \lambda_j B_i = \delta_{ij}.  □

Having finished the proof of linear independence, we must note that the sequence of linear functionals \{\lambda_j\}_{j=1}^{n} is not necessarily unique, because we only need the value of f_j and possibly its derivatives at a discrete set of points in the knot sequence t. In fact, the form in the exposition given in [1] varies from the form we have given here. Nevertheless, that form is easily derived from ours.

Let us begin this derivation by completing the construction of f_j given in the general order case. Let f_j = r(t)\psi_j(t) where

\psi_j(t) = (t_{j+k} - t_j)(t - t_{j+1}) \cdots (t - t_{j+k-1}).

Let us complete our definition of r(t), partially given in Equation (1.13). Let us define r \in C^\infty(\mathbb{R}) such that

r(t) = \begin{cases} 0, & t \le \tau_j, \\ 1, & t \ge \tau_j'. \end{cases}

In this construction we let (t_r, t_{r+1}) \subset (t_j, t_{j+k}) be an open interval, which must exist since t_j < t_{j+k}. We choose \tau_j and \tau_j' such that

t_j \le t_r < \tau_j < \tau_j' < t_{r+1} \le t_{j+k}.

Now, consider the following:

\phi_i f_j = \lambda_j B_i = \frac{1}{(k-1)!\,(t_{j+k} - t_j)} \int_a^b B_i(t)\,f_j^{(k)}(t)\,dt = \delta_{ij}.

However, f_j(t) \in \mathbb{P}_k for t \notin (\tau_j, \tau_j'), which implies

\lambda_j B_i = \frac{1}{(k-1)!\,(t_{j+k} - t_j)} \int_{\tau_j}^{\tau_j'} B_i(t)\,f_j^{(k)}(t)\,dt.

Let us now integrate by parts k times, resulting in

\lambda_j B_i = \frac{1}{(k-1)!\,(t_{j+k} - t_j)} \left\{ \left[ \sum_{r=0}^{k-1} (-1)^r B_i^{(r)}(t)\,f_j^{(k-1-r)}(t) \right]_{\tau_j}^{\tau_j'} + (-1)^k \int_{\tau_j}^{\tau_j'} B_i^{(k)}(t)\,f_j(t)\,dt \right\}.

The last term is zero since B_i^{(k)}(t) = 0 for all t \in \mathbb{R} when the splines are of order k, which we have assumed from the beginning. Also notice that each term of the summation is zero for t = \tau_j by our construction of f_j(t) = r(t)\psi_j(t). However, r(\tau_j') = 1. At this point we have

\lambda_j B_i = \frac{1}{(k-1)!\,(t_{j+k} - t_j)} \sum_{r=0}^{k-1} (-1)^r B_i^{(r)}(\tau_j')\,\psi_j^{(k-1-r)}(\tau_j').

We take the limit as \tau_j' \to \tau_j and obtain

\lambda_j B_i = \frac{1}{(k-1)!\,(t_{j+k} - t_j)} \sum_{r=0}^{k-1} (-1)^r B_i^{(r)}(\tau_j)\,\psi_j^{(k-1-r)}(\tau_j),

where \tau_j \in (t_j, t_{j+k}). This is precisely the form which appears in [1].
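As a quick sanity check of this formula, consider the case k = 1. The sum reduces to the single term r = 0, with \psi_j(t) = t_{j+1} - t_j, so
\[
\lambda_j B_i = \frac{1}{0!\,(t_{j+1} - t_j)}\,B_i(\tau_j)\,(t_{j+1} - t_j) = B_i(\tau_j),
\qquad \tau_j \in (t_j, t_{j+1}),
\]
and since first-order B-splines are the characteristic functions of the knot intervals, B_i(\tau_j) = \delta_{ij}, in agreement with the construction in Case 1.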

1.4 Least Squares Approximation using B-Splines

We can now use the normalized B-splines to create an approximation of data, or more specifically, a least squares approximation. Recall that a least squares approximation is the best approximation with respect to a specific norm derived from an inner product. The discrete inner product has the general form

(f, g) = \sum_{i=1}^{n} f(\tau_i)\,g(\tau_i)\,w_i,

where \tau is a sequence of data points and the w_i are weights specific to the inner product considered. We will use the common symbol \|\cdot\| for the norm generated by the above inner product, given by the formula \|f\| = (f, f)^{1/2}.

Consider the specific space defined in Section 1.2, S_{k,t}, for which the B-splines form a basis. This space is a convenient choice for finding s^* \in S_{k,t} such that

\|f - s^*\| = \min_{s \in S_{k,t}} \|f - s\|,

based on the fact that it is relatively well-conditioned for moderate k and the coefficient matrix for the normal equations

\sum_{j=1}^{n} (B_i, B_j)\,c_j = (B_i, g), for i = 1, \ldots, n,

has a bandwidth of less than k. Many of the details of this approximation are omitted from this paper and we refer the reader to [1] for further exposition.

In the following approximation methods we take advantage of program code presented in that text that calculates the coefficients of the B-splines for the function s^* that best approximates a set of data. Code is also provided that evaluates that B-spline function at some point x \in \mathbb{R}.
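To indicate how such a fit might be set up, the following C sketch assembles and solves the normal equations above for data (\tau_l, g_l) with weights w_l. It is a minimal illustration assuming the bspline routine sketched in Section 1.2, and it uses a dense solve for brevity; a production code such as the one in [1] would instead exploit the band structure noted above:

    /* Solve sum_j (B_i,B_j) c_j = (B_i,g), i = 1,...,n, assuming n <= 64.
       (B_i,B_j) = 0 whenever |i-j| >= k, so A is in fact banded. */
    void lsq_spline_coeffs(int n, int k, const double *t,
                           int m, const double *tau, const double *g,
                           const double *w, double *c)
    {
        double A[64][64] = {{0}}, b[64] = {0};
        for (int l = 0; l < m; l++)
            for (int i = 0; i < n; i++) {
                double Bi = bspline(i, k, t, tau[l]);
                if (Bi == 0.0) continue;          /* small support */
                b[i] += Bi * g[l] * w[l];
                for (int j = 0; j < n; j++)
                    A[i][j] += Bi * bspline(j, k, t, tau[l]) * w[l];
            }
        for (int p = 0; p < n; p++)               /* naive elimination     */
            for (int i = p + 1; i < n; i++) {     /* (Gram matrix is SPD   */
                double f = A[i][p] / A[p][p];     /*  in well-posed cases) */
                for (int j = p; j < n; j++) A[i][j] -= f * A[p][j];
                b[i] -= f * b[p];
            }
        for (int i = n - 1; i >= 0; i--) {        /* back substitution */
            c[i] = b[i];
            for (int j = i + 1; j < n; j++) c[i] -= A[i][j] * c[j];
            c[i] /= A[i][i];
        }
    }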


CHAPTER 2

UNIVARIATE PARAMETRIC APPROXIMATION USING ODR SPLINES

In this chapter, we will attempt to approximate coordinate data in space, which possesses no explicit parameterization, by some parametric curve. We will deal mostly with the numerical aspects of a parametric orthogonal distance regression problem, which arises naturally in the context of geometric data fitting.

2.1 Statement of the Problem

Let us now consider m data points f = \{f_i\}_{i=1}^{m} = \{(x_i, y_i)\}_{i=1}^{m}. We will begin by addressing a two-dimensional data problem, but the following algorithms will extend naturally into a three-dimensional case, and even n dimensions, with arbitrary n > 3. Our ultimate goal will be to fit a parametric curve to the data and visually represent the data by this curve. We choose to measure the accuracy of the parametric curves using the Euclidean distance formula, \|h\| = \sqrt{\sum_{i=1}^{N} h_i^2}, where N is the dimension such that h \in \mathbb{R}^N. For our particular problem, let us find the parametric curve, g(t) = (x(t), y(t)), where t \in [0,1], that minimizes the value

E(g, f) = \sum_{i=1}^{m} \|f_i - g(\tau_i)\|^2.    (2.1)

That is, we are trying to find \min_{\tau, g} \{E(g, f)\}. We note that this is not the classical measure of error in approximation problems, as we have not chosen the values of the \tau's beforehand but include them as variables to be minimized. The only restriction we place on the \tau's is that they be contained in the unit parametric interval [0,1].

The next restriction we place on Equation (2.1) is to select the approximation function g from the spline space, S_{k,t}, as defined in Section 1.2.4. With this restriction we now limit our discussion to the following approximation problem: find s^* such that

E(s^*, f) = \inf \{ E(s, f) : s = (s_1, s_2),\ s_1, s_2 \in S_{k,t} \},    (2.2)

where s_1(t) = \sum_{i=1}^{n} \alpha_i B_{i,k,t}(t) and s_2(t) = \sum_{i=1}^{n} \beta_i B_{i,k,t}(t). Let us denote the set of coefficients as c = \{ \{\alpha_i\}_{i=1}^{n}, \{\beta_i\}_{i=1}^{n} \}. We are trying to find the parametric curve s^* that produces the infimum in Problem (2.2), or equivalently, find the coefficients, c, that produce the approximating spline curve s^*. We refer to such curves (if they exist) as orthogonal distance regression (ODR) splines.

Let us consider some arbitrary parametric spline curve

s(c, t) = \left( \sum_{i=1}^{n} \alpha_i B_{i,k,t}(t),\ \sum_{i=1}^{n} \beta_i B_{i,k,t}(t) \right)

over the unit parametric interval. We will consider this as our initial guess of the approximation curve. Now, we wish to minimize the distance from each data point to this curve. In considering the distance from a point to a curve, we mean the smallest distance from that data point to any point on the curve. Refer to Figure 2.1. In essence, we want to minimize the function

G(c, t) = \sum_{i=1}^{m} \min_{t \in [0,1]} \|s(c, t) - f_i\|^2,    (2.3)

or, written explicitly,

G(c, t) = \sum_{i=1}^{m} \min_{t \in [0,1]} \left[ \left( \sum_i \alpha_i B_{i,k,t}(t) - x_i \right)^2 + \left( \sum_i \beta_i B_{i,k,t}(t) - y_i \right)^2 \right].    (2.4)

Figure 2.1: Data's Distance from a Curve.

In the following sections in this chapter, we explain various approaches taken to implement this problem in a general computing environment. Following this explanation, we walk through several numerical examples emphasizing significant behaviors exhibited by this approximation method. This leads to the natural extension of these

algorithms into a three-dimensional case. For the sake of time and space we merely

mention possibilities for high performance computing (e.g., parallel programming) in

the final chapter.

2.2 Description of Numerical Methods that Solve the Problem

Much of the work done in computing the solution involves minimizing some nonlinear objective function with the only constraint being that \tau_i \in [0,1] for i = 1, \ldots, m. We make use of the software package LSGRG2, developed by Leon S. Lasdon with Allen D. Warren at Optimal Methods, Inc. and distributed by Windward Technologies, Inc.

We begin our description of the computational algorithms with a short explanation

of this software.

2.2.1 A Short Explanation of LSGRG2 [5]

LSGRG2 is a program that solves constrained nonlinear optimization problems of the following form: minimize or maximize g_p(X), subject to

glb_i \le g_i(X) \le gub_i for i = 1, \ldots, m,\ i \ne p,
xlb_j \le x_j \le xub_j for j = 1, \ldots, n.

Here X is a vector of n variables, x_1, \ldots, x_n, and the functions g_1, \ldots, g_m all depend on X.

Any of these functions may be nonlinear. Any of the bounds may be infinite and

any of the constraints may be absent. If there are no constraints, the problem is

solved as an unconstrained optimization problem. Upper and lower bounds on the

variables are optional and, if present, are not treated as additional constraints but

are handled separately. The program solves problems of this form by the Generalized

Reduced Gradient Method. LSGRG2 has features to deal with the realities of large

optimization problem solving. LSGRG2 is capable of solving problems with several

thousand variables and several thousand constraints due to the utilization of sparse

matrix algorithms for matrix operations.


Only problem specification information is required to be supplied by the user.

There are many options that are available for the experienced user to gain control

over the solution process. LSGRG2 is a fully-tested and robust algorithm backed

by more than 15 years of solving real-world problems in the petroleum, chemical,

defense, financial, agriculture, and process control industries.

The algorithm for large-scale constrained optimization used by LSGRG2 is described in [9].

2.2.2 Various Solution Algorithms using LSGRG2 and Least-squares Approximation of B-Spline Coefficients

In the process of designing code to produce a solution to the problem described in

the previous section, many possible approaches present themselves. It is important

to note that we have both the \tau's and the spline coefficients over which to optimize.

In addition, we have a general optimization code that will minimize any objective

function. We also have at our disposal a least squares code for finding the coefficients

of the best approximating spline curve given a set of two-dimensional data points.

Overall, five different algorithms have been developed for the computation of a

solution. At the core they are all essentially searching for the same solution, but due

to numerical differences the final result from each algorithm may vary, slightly or

not so slightly. In addition, some methods are more vulnerable to singularities, data

ordering, and variations in the initial guess. As with most optimization algorithms,

these methods still depend highly on the quality of the initial guess. Some attempts

are made to alleviate the level of sensitivity in this area and will be discussed in the

following descriptions of the algorithms.

2.2.2.1 Method 1

The first algorithm for computing the solution makes use of both available optimization codes: LSGRG2, which will minimize any objective function, and de Boor's

code, which finds the least squares spline approximation of two-dimensional data. The


first step in this algorithm is to write an objective function for LSGRG2. Basically, the objective function, written in the C programming language, is passed the values of the \tau's and computes the value of Equation (2.1), which is the sum of the squared distances from each data point to the corresponding point on the curve at \tau_i for i = 1, \ldots, m. In other words, the objective function provides the following mapping:

(c, \tau) \to \sum_{i=1}^{m} \|s(c, \tau_i) - f_i\|^2.

Within this objective function, the coefficients, \{\alpha_i\}_{i=1}^{n}, from s_1(t) in Equation (2.2), are found using the least-squares approximation code, where the two-dimensional data is given by \{(\tau_i, x_i)\}_{i=1}^{m}. The coefficients, \{\beta_i\}_{i=1}^{n}, are found the same way except the two-dimensional data in the least-squares problem is given by \{(\tau_i, y_i)\}_{i=1}^{m}.

We can easily extend all the work done so far to a case involving three-dimensional data. We would simply have one more ordinate in our data and thus one more spline function in our parametric curve. The result is an additional set of coefficients being solved for within our objective function, and an additional term in our Euclidean distance function. That is, we minimize the function

G(c, t) = \sum_{i=1}^{m} \min_{t \in [0,1]} \left[ \left( \sum_i \alpha_i B_{i,k,t}(t) - x_i \right)^2 + \left( \sum_i \beta_i B_{i,k,t}(t) - y_i \right)^2 + \left( \sum_i \gamma_i B_{i,k,t}(t) - z_i \right)^2 \right],

where now c = \{ \{\alpha_i\}_{i=1}^{n}, \{\beta_i\}_{i=1}^{n}, \{\gamma_i\}_{i=1}^{n} \}. Now, as before in the two-dimensional case, this first method uses LSGRG2 to calculate the \tau's, and the coefficients are calculated by the least-squares routine; e.g., the coefficients \{\gamma_i\}_{i=1}^{n} are calculated using data \{(\tau_i, z_i)\}_{i=1}^{m}.
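Schematically, the Method 1 objective might be organized as follows. This is a sketch built from the hypothetical helpers of Chapter 1 (lsq_spline_coeffs and spline_eval standing in for de Boor's routines); it is not the thesis's actual C routine, and the LSGRG2 calling convention is not reproduced here:

    /* Method 1 objective, 3-D case: the optimizer varies the tau's, and
       the spline coefficients are refit by least squares inside the
       objective.  Assumes uniform weights, m <= 1024, n <= 64. */
    double method1_objective(int m, const double *tau,
                             const double *x, const double *y, const double *z,
                             int n, int k, const double *t)
    {
        static double w[1024], a[64], b[64], c[64];
        for (int i = 0; i < m; i++) w[i] = 1.0;
        lsq_spline_coeffs(n, k, t, m, tau, x, w, a);   /* data (tau_i, x_i) */
        lsq_spline_coeffs(n, k, t, m, tau, y, w, b);   /* data (tau_i, y_i) */
        lsq_spline_coeffs(n, k, t, m, tau, z, w, c);   /* data (tau_i, z_i) */

        double G = 0.0;                /* sum of squared distances (2.1) */
        for (int i = 0; i < m; i++) {
            double dx = spline_eval(n, k, t, a, tau[i]) - x[i];
            double dy = spline_eval(n, k, t, b, tau[i]) - y[i];
            double dz = spline_eval(n, k, t, c, tau[i]) - z[i];
            G += dx * dx + dy * dy + dz * dz;
        }
        return G;
    }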

2.2.2.2 Method 2

The second method is similar to the first in that the \tau's are calculated by LSGRG2

when it minimizes the same objective function as in Method 1. The difference in this


approach is that the spline coefficients are included as variables to be optimized by LSGRG2. They are not calculated using the least-squares routine. The objective function has as its variables both the \tau's and the spline coefficients. As a result, this method is entirely dependent on the ability of LSGRG2 to handle the minimization problem. Should the optimizer have trouble with certain singularities or local minima in the objective function, this method would inherit the same problems. In order to avoid this, we propose a global search for the \tau's, implemented in the following method.

2.2.2.3 Method 3

This method begins, as all of them do, by selecting some arbitrary spline curve, i.e., selecting some arbitrary set of spline coefficients, c. As with any optimization problem, there is a degree of sensitivity attached to the initial guess. To slightly reduce this we begin this algorithm with an expensive search for each \tau_i such that

\|s(c, \tau_i) - f_i\| = \min_{t \in [0,1]} \|s(c, t) - f_i\|, i = 1, \ldots, m.    (2.5)

This is done by simply taking a sequence of 1000 equally-spaced points in [0,1] and finding which value produces the minimum distance to the i-th data point, as in the sketch below. This value is assigned to \tau_i. Expensive as this search is, it is only performed initially, when the initial guess is selected. It would be plausible for smaller data sets to repeat this first step with a finer mesh of points at various stages in the following iteration, but for the numerical examples that follow this was not done, as it was deemed unnecessary. The reason will become obvious in the explanation of the following iteration.
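A minimal C sketch of this initial mesh search (two-dimensional case, assuming the spline_eval helper from Chapter 1; illustrative only):

    /* For each data point, scan 1000 equally-spaced parameter values in
       [0,1] and keep the one closest to that point, per Equation (2.5). */
    void mesh_search_tau(int m, const double *x, const double *y,
                         int n, int k, const double *t,
                         const double *a, const double *b, double *tau)
    {
        for (int i = 0; i < m; i++) {
            double best = 1e300;
            for (int j = 0; j < 1000; j++) {
                double s  = j / 999.0;                      /* mesh point */
                double dx = spline_eval(n, k, t, a, s) - x[i];
                double dy = spline_eval(n, k, t, b, s) - y[i];
                double d  = dx * dx + dy * dy;
                if (d < best) { best = d; tau[i] = s; }
            }
        }
    }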

With the initial coefficients selected and an initial set of \tau's selected, we perform a two-part iteration identical to the method described in [6]. The first part uses LSGRG2 on each \tau_i for i = 1, \ldots, m, minimizing the distance between the i-th data point and the curve, s. In other words, LSGRG2 is run m times. Basically, we try to accomplish the same task as above, shifting the \tau's so that the sum of the distances from the points to the curve is minimized. The approach is now to use LSGRG2 (i.e., the generalized reduced gradient method) instead of just a mesh value search. The second part of this iteration is to find the spline curve, using least squares, which minimizes the distance from each of these selected \tau's to the given data points. So, in essence, we iterate between a global search for the \tau's and a least-squares approximation of these values.

We emphasize again that the two-dimensional data points being approximated by the least-squares routine are the parameterized coordinates of the data set. In other words, we are approximating the data using \{(\tau_i, P_j(f_i))\}_{i=1}^{m}, where P_j(f_i) is the projection operation onto the j-th ordinate of data point f_i. In the case of two-dimensional data, j = 1 or 2, and in the three-dimensional case, j = 1, 2, or 3. We have two and three different splines being computed, respectively. We continue the

iteration until one of three criteria is met: (1) a certain minimum error is reached,

(2) the change in error between iterations is less than some set tolerance for three

consecutive iterations, or (3) the number of iterations reaches some maximum value,

which allows us to control the amount of time spent in this approximation.

A global search for the \tau's that minimize the distance from the data to the curve

introduces possible variations on Methods 1 and 2. These variations appear in our

final two methods.

2.2.2.4 Methods 4 and 5

Consider the first method's approach of using LSGRG2 to find the \tau's by minimizing an objective function, which calculates the spline coefficients using a least-squares routine. In this case, we run the risk of selecting some initial curve on which LSGRG2 traps itself in a local minimum. Let us attempt to alleviate this problem by iterating between a global search for the \tau's, as in Method 3, and a local minimization of the \tau's, as in Method 1. Essentially, in Method 4 we are attempting to make the algorithm much less susceptible to a bad initial guess. We begin with the initial guess as always. We perform the expensive mesh value search for the \tau's, obtaining an initial guess for the \tau's. We then use LSGRG2 to find \tau_i in Equation (2.5) for i = 1, \ldots, m.

Figure 2.2: 3-D Butterfly Curve (front and side views).

We follow with Method 1, and iterate.

Based on the fact that Method 2 is only a slight variation on Method 1, we can

also iterate between the global search and Method 2. The result is a fifth method for

computing the solution.

2.3 Numerical Examples: Comparing Methods

In order to observe the behavior of our methods proposed above, we designed four

numerical examples. In each case we will look at how particular algorithms mentioned

previously compare in computing the solution.

2.3.1 Example 1, Simple Examples with Data Shuffling

The first set of examples with which we test our algorithms is based on very simple and harmless-looking parametric equations. For two-dimensional data, we used

x(t) = t,
y(t) = sign(t) \cdot |t|^{2.65},    (2.6)

where t \in [-1, 1], which produces the polynomial-like graph. In the three-dimensional data case we use an example given by the LiveGraphics3d website [4]. The following parametric equations generate the butterfly curve in Figure 2.2:

x(t) = b(t)\cos(t),
y(t) = b(t)\sin(t),   for 0 \le t \le 24\pi,    (2.7)
z(t) = 0.5\,b(t),

where b(t) = e^{\cos t} - 2\cos(4t) + \sin^5\!\left(\tfrac{t}{12}\right). In this example we look at how well

our algorithms compare in generating an approximation to the data generated by

the above equations. In general, we want to test how they might handle data with

no explicit parameterization. To test this, we shuffle the data in various ways and

compare how well they approach the actual curve. There are fifteen data points in

the following example, approximated by parabolic splines.

As we might expect, the computation of the solution to the unshuffled case is

not difficult for any of our algorithms. As you can see in Table 2.1, the final error

and computation time are very small for all methods in both the two-dimensional

and three-dimensional case. For these particular examples, we also compare the

algorithms when we increase the dimension of the spline space from 1/3 of the number of

data points to 2/3.

We observe that in most cases Method 4 was able to best approximate the data, and it was always a slight improvement over Method 1, as it is merely a variation of it. In addition, we observe that Method 5 is always a slight improvement over Method 2 for the same reason. In each case, with only one exception, Method 3 produces worse results than either Method 4 or 5. It is only in the final case that we see that Method 3 out-performs Methods 4 and 5. In Figure 2.3, we place the results

from Methods 4 and 5 in the two-dimensional case in which we clearly observe a good

approximation. We omit any three-dimensional visualizations in this example due to

the fact that such images are mostly indiscernible on two-dimensional paper. In later

examples the three-dimensional visualization is much more appropriate and will be

given there.

So far we have seen that in these very simple cases, parametric approximation of

data using ODR splines is effective. Now let us take the same data as before but

Table 2.1: Unshuffled Data in Example 1

Two Dimensions: Smaller Spline Space

            Final Error   Average Error per Data Pt.   Computation Time (sec)
Method 1    0.1040        0.0069                       0.06
Method 2    0.0285        0.0019                       0.08
Method 3    0.1036        0.0069                       1.43
Method 4    0.1000        0.0067                       2.75
Method 5    0.0285        0.0019                       0.60

Two Dimensions: Larger Spline Space

            Final Error   Average Error per Data Pt.   Computation Time (sec)
Method 1    0.0038        0.0003                       0.03
Method 2    0.0091        0.0002                       0.12
Method 3    0.0282        0.0019                       0.60
Method 4    0.0025        0.0002                       1.05
Method 5    0.0090        0.0006                       0.83

Three Dimensions: Smaller Spline Space

            Final Error   Average Error per Data Pt.   Computation Time (sec)
Method 1    6.7265        0.2691                       0.31
Method 2    7.4106        0.2964                       0.36
Method 3    8.2712        0.3308                       19.55
Method 4    6.2664        0.2507                       31.47
Method 5    7.3973        0.2959                       1.76

Three Dimensions: Larger Spline Space

            Final Error   Average Error per Data Pt.   Computation Time (sec)
Method 1    0.5889        0.0236                       0.31
Method 2    3.0252        0.1210                       0.61
Method 3    0.7047        0.0282                       20.92
Method 4    0.5884        0.0235                       1.82
Method 5    3.0185        0.1207                       2.48

alter it by shifting the order of the data points. We maintain the same initial guesses

and the same spline dimensions and order as before but merely swap data points.

We explore three different cases: We first look at swapping nearby data points, by

swapping the third and the fourth data point from the previous fifteen. Second, we

swap distant points, the third and the fourteenth points. Third, we choose the data

points in random order. In Figure 2.4, we compare swapping nearby and distant data

as well as random order points using each method.

We observe several interesting occurrences in the shuffling of data. With the

exception of nearby swapping, Methods 1 and 2 do not handle unordered data well.

Figure 2.3: Example 1 with Unshuffled Data, Methods 4 and 5.

The erratic behavior we see in rows one and two can be attributed to the limitations of LSGRG2, or any optimizer that takes a descent path. It possibly settles into a local minimum while shifting the \tau's, never actually completely reordering them. Swapping two nearby \tau's is apparently not difficult, but swapping them in a distant case or a random order case is beyond the scope of these algorithms. Nevertheless, we see that generally only Method 5, which begins with an expensive search for the \tau's, tends to handle non-parameterized data well, at least with respect to the jumbled order in which data might occur.

2.3.2 Example 2, Vertical Data

The second example that we considered involves the question of existence of the

solution to Problem (2.2), as well as its three-dimensional equivalent. We look at a

two-dimensional and three-dimensional example, both with what we will refer to as

vertical data.

Consider first the two-dimensional data (x_i, y_i) for i = 1, \ldots, 15, where

x_i = -0.75,  y_i = 0.75 + 0.375\,i,         i = 1, \ldots, 5,
x_i = 0.0,    y_i = 0.75 + 0.375\,(i - 5),   i = 6, \ldots, 10,
x_i = 0.75,   y_i = 0.75 + 0.375\,(i - 10),  i = 11, \ldots, 15.

Figure 2.4: Example 1 with Shuffled Data, All Methods. (The figure shows a grid of plots of Methods 1-5 against nearby shuffling, distant shuffling, and random order, each panel labeled with its total error.)

Figure 2.5: Example 2, two-dimensional Vertical Data (the fitted curve after 0, 1, 10, 100, 400, and 1600 iterations).

In this case we are most interested in looking at the behavior of the calculated solution. We search for any indication that the solution might be unbounded. As we are limited in the viewing of such solutions by the behavior of LSGRG2 in Methods 1, 2, 4 and 5, we will only use Method 3 for this particular example. Our results in this example parallel the results found in [6]. These results suggest that despite the error, which approaches zero, the maximum magnitude of the coefficients of the spline curves increases, seemingly without bound. See our results in Table 2.2 and Figure 2.5. We present graphs in Figure 2.6 that plot the number of iterations against the maximum magnitude of the spline coefficients on both linear and logarithmic scales. We observe that the coefficients appear to be growing without bound. If this is the case, we must assume the actual minimum for which we are solving does not exist. An explanation of this claim is given in [6].

Let us now consider a similar example extended into a three-dimensional case.

Again, we set up the data as vertical columns but we make sure to establish the


Table 2.2: Example 2, two-dimensional Vertical Data

Iterations   Final Error   Coef. Magnitude   CPU Time (sec)
1            6.7103        1.1422            0.23
10           1.0786        3.2420            1.81
100          0.3453        10.2929           12.36
400          0.1849        20.6460           45.86
800          0.1348        29.18             93.87
1600         0.0972        41.22             191.09

Figure 2.6: Example 2, Seemingly Unbounded Spline Coefficients (maximum coefficient magnitude plotted against iterations, on linear axes and as log(magnitude) against log(iterations)).

columns as non-coplanar. If the columns did lie in the same plane, we would merely have the case we considered above. Let us consider an extension of this case by using the data (x_i, y_i, z_i) for i = 1, \ldots, 30 such that:

x_i = 0.0,  y_i = 0.0,  z_i = \tfrac{1}{5}\,i,         i = 1, \ldots, 10,
x_i = 0.0,  y_i = 1.0,  z_i = \tfrac{1}{5}\,(i - 10),  i = 11, \ldots, 20,    (2.8)
x_i = 1.0,  y_i = 0.0,  z_i = \tfrac{1}{5}\,(i - 20),  i = 21, \ldots, 30.

In approximating this data using ODR splines, we find, as before, that the coefficients seem to grow without bound. In Figure 2.7, we again illustrate that the maximum magnitude of the spline coefficients grows at least logarithmically. We also see the same behavior in Figure 2.8 as before.

Figure 2.7: Example 2, Unbounded Coefficients in 3-D Case.

Table 2.3: Example 2, three-dimensional Vertical Data

Iterations   Final Error   Coef. Magnitude   CPU Time (sec, on a Pentium III 800 MHz)
1            28.0876       1.1576            0.55
10           0.1115        1.6510            2.97
100          0.0289        2.0046            23.34
400          0.0148        2.3870            89.53
800          0.0105        2.6389            186.25
1600         0.0073        2.9420            370.36

The main difference between this three-dimensional example and the earlier two-dimensional case is that the latter possesses coefficients that grow much faster. However, as we illustrated in Figure 2.7, the coefficients are still growing in an unbounded manner. Table 2.3 contains the same information as Table 2.2 but for the three-dimensional case. Again, we observe that the increase in spline coefficient magnitude is small; however, we have already seen that this increase is still at least logarithmic. We see that both cases lead us to the following claim.

Claim 2.1. Given the data in Equation (2.8), Problem (2.2) is unbounded and a solution does not exist; i.e., the approximation problem using parameterization and ODR splines does not have a solution, due to the fact that the maximum magnitude of the spline coefficients approaches infinity as the error approaches zero.

Figure 2.8: Example 2, three-dimensional Vertical Data (the fitted curve after 1, 100, and 1600 iterations).

2.3.3 Example 3, Helix Data with Noise

In data collection, it is often the case that a certain amount of noise interference

is inevitable. Using data from the typical helix given by the parametric equations in Equation (2.9), we add ambient random noise to determine how well the structure of the helix is maintained while being approximated by each of our methods:

x = \cos(t),
y = \sin(t),   for 0 \le t \le 4\pi.    (2.9)
z = t,

The basic helix is given in Figure 2.9.

Figure 2.9: Basic Helix (top view).

To simulate noise at three levels, we merely generate random three-dimensional vectors with length less than some maximum

magnitude, and add each vector to a corresponding point. We do this with three

different levels of noise, |MAX LENGTH| = 0.05, 0.1, and 0.25, as in the sketch below.
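A minimal C sketch of this noise procedure (illustrative only; rejection sampling draws each perturbation uniformly from a ball of radius max_noise, so its length never exceeds the stated maximum):

    #include <math.h>
    #include <stdlib.h>

    /* Sample m points of the helix (2.9) and perturb each by a random
       vector of length at most max_noise. */
    void noisy_helix(int m, double max_noise, double *x, double *y, double *z)
    {
        const double PI = 3.14159265358979323846;
        for (int i = 0; i < m; i++) {
            double t = 4.0 * PI * i / (m - 1);
            double dx, dy, dz;
            do {                       /* reject points outside the unit ball */
                dx = 2.0 * rand() / RAND_MAX - 1.0;
                dy = 2.0 * rand() / RAND_MAX - 1.0;
                dz = 2.0 * rand() / RAND_MAX - 1.0;
            } while (dx * dx + dy * dy + dz * dz > 1.0);
            x[i] = cos(t) + max_noise * dx;
            y[i] = sin(t) + max_noise * dy;
            z[i] = t      + max_noise * dz;
        }
    }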

In general we notice that each method performs similarly in approximating the data. As we saw in the first example, Methods 4 and 5 tend to produce the best accuracy with respect to the data. Nevertheless, it is possible to minimize the error and yet lose the structure of the helix. In Figure 2.10, we compare how Method 5 handles the various noise levels. There is some degree of distortion, but this method seems to pick up the structure of the helix very well. Higher amounts of noise, as we would expect, produce some variation in the curve, but as the figure illustrates this variation is slight and barely noticeable.

2.3.4 Example 4, Random Data with Varying Initial Guesses

In the final example for the univariate parameterization of data, we want to illustrate the sensitivity of each method to the initial guess. To do so, we use as our data a set of random data points within the region [-1, 1] \times [-1, 1] for the two-dimensional case and the region [0, 1] \times [0, 1] \times [0, 1] for the three-dimensional case. We hope to see that the change in the resultant curve in each method is minimal and that the initial guess could vary, still producing the same curve. Despite our best hopes, the final results in each case did vary (significantly, in some cases) based on a change in the initial guess. In other words, the quality of the approximation is, to some degree,

dependent on the quality of the initial curve with which we begin our iterations.

Figure 2.10: Example 3, Noisy Helix Data Approximated by Method 5 (panels: no noise, and maximum noise 0.05, 0.1, and 0.25).

The reason we choose random data to approximate is to simulate a data set in which there is no explicit parameterization and a completely unknown structure is represented. The default initial guess in each of the above cases is a line that passes through the first data point and the farthest point from it. In general, the coefficients of the splines are initially selected as linearly increasing from 0 to 1. We compare this in the two-dimensional case to selecting a spline curve that produces the line y = x, where x \in [0, 1], as well as the spline curve that best approximates the unit circle.

Figure 2.11: Example 4, Random 2-D Data with Varying Initial Guesses (Methods 4 and 5).

That is,

x = \cos(t),\quad y = \sin(t),   where 0 \le t \le 2\pi.

In the first two methods, we are also required to make an initial guess as to the value of the \tau's. We eliminate the need for this in the last three methods by performing the expensive global search for the \tau's based on the initial curve which we select. Because Methods 4 and 5 are always improvements on Methods 1 and 2, we omit the consideration of the first two methods. In Figure 2.11, we see the two-dimensional case in Methods 4 and 5 and how they vary with respect to the initial guess. Keep in mind that they appear in the following order: default linear guess, y = x, and the unit circle.

A better data set for this particular issue of varying the initial guess might be one like the first example, where there is actually some curve on which the data lies. Figure 2.12 illustrates that using Method 5 and varying the initial guess in the previous manner, in some cases, alters the result. The data approximated in this figure is identical to the same randomly shuffled data in the last part of Example 1, given by the parametric equations (2.6).

Figure 2.12: Varying Initial Guesses on Shuffled Data in Example 1 (default guess, y = x, and unit circle).

Overall, we have attempted to demonstrate that these methods can sufficiently approximate data with no explicit parameterization. In Example 1, we see that the order of the data has little effect on the quality of the approximation in Methods 3, 4, and 5, but the optimization program LSGRG2 limits the effectiveness of Methods 1 and 2 in shifting order. In Example 2, we see that the error can be minimized by continuing the iterations in Method 3; nevertheless, the actual infimum in Problem (2.2) may not even exist. We see, also, that despite noisy data, this method tends to maintain the structure of the data very well, as shown in Example 3. Finally, as is the case with almost every optimization problem, the quality of the result depends on the quality of the initial guess, as seen in Example 4.


CHAPTER 3

AN EXTENSION TO THE BIVARIATE APPROXIMATION PROBLEM

3.1 Statement of the Problem

In this chapter we now address a whole new set of issues by extending our parameterization to two variables, as opposed to one in the previous chapter. The result is that instead of approximating a set of data with a curve through space, we now attempt to approximate the data using tensor product splines, which will ultimately produce a surface approximation. Let us begin by saying that this chapter merely skims the surface of this topic and leaves a wide realm of research open for future work by the author. Surface approximation of data introduces a variety of geometric and topological issues. A multitude of methods, using splines and other tools, present themselves as viable options to produce the manifold that best approximates a set of data. Nevertheless, due to the limited time and extent of this thesis, we have restricted our work to considering a very basic method built on the tensor product splines as described in [1]. We will look at only a few examples to witness the success of the approximation in producing an appropriate structure based on the data presented. Again, keep in mind that this is a very limited discussion that serves to open the door to a much broader subject of research.

Let us consider the case where we are given a set of three-dimensional data, (x_i, y_i, z_i) for i = 1, \ldots, m. It is our ultimate goal to find a surface that best approximates the structure of this data. Let us take U = S_{h,s} and V = S_{k,t}, where U and V are of dimension n_1 and n_2, respectively. This implies that s and t are knot sequences of length n_1 + h and n_2 + k, respectively, where h and k are the spline orders. Let us define the general tensor product of spaces U and V, denoted U \otimes V, as

U \otimes V = \left\{ \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{ij}\,(u_i \otimes v_j) : a_{ij} \in \mathbb{R},\ u_i \in U,\ v_j \in V\ \forall i, j \right\}.

In this definition u_i \otimes v_j is the usual tensor product, given as (u \otimes v)(x, y) = u(x)v(y) for all (x, y) \in X \times Y. Basically, this implies that we can evaluate the tensor product

spline w \in U \otimes V at some point (a, b) by

w(a, b) = \left( \sum_{i,j} a_{ij}\,B_{i,h,s} \otimes B_{j,k,t} \right)(a, b) = \sum_{i=1}^{n_1} \left( \sum_{j=1}^{n_2} a_{ij}\,B_{j,k,t}(b) \right) B_{i,h,s}(a)

or    (3.1)

w(a, b) = \sum_{j=1}^{n_2} \left( \sum_{i=1}^{n_1} a_{ij}\,B_{i,h,s}(a) \right) B_{j,k,t}(b).
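A direct C sketch of this evaluation (assuming the univariate bspline helper from Chapter 1 and n_2 <= 64; illustrative only):

    /* Evaluate the tensor product spline (3.1) at (a, b).  coef[i][j]
       holds a_ij; s and t are the knot sequences of S_{h,s} and S_{k,t}. */
    double tensor_spline_eval(int n1, int h, const double *s,
                              int n2, int k, const double *t,
                              const double coef[][64], double a, double b)
    {
        double w = 0.0;
        for (int i = 0; i < n1; i++) {
            double Bi = bspline(i, h, s, a);
            if (Bi == 0.0) continue;            /* small support of B_i */
            double inner = 0.0;                 /* sum_j a_ij B_{j,k,t}(b) */
            for (int j = 0; j < n2; j++)
                inner += coef[i][j] * bspline(j, k, t, b);
            w += inner * Bi;
        }
        return w;
    }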

To produce the approximation surface, we are going to minimize the following

function:

G'(c,r) = 5 ] ^ ^ (Xi - Wi(Tii,T2i)) ^-(yi-W2(Tii,T2iSf^

(zi -wATu,T2i))' (3.2)

In this equation, we define c as the set of coefficients of the tensor splines wx,W2, and w^ G

IJ ®V. The array r is defined as r = {{T\i,T2i)Yi^\, which contains the points at

which the splines are to be evaluated. As almost a parallel to the Problem (2.2) we

attempt to find the splines that minimize the distance from each data point to the

surface defined by the parametric equations

\[
x = w_1(a, b), \quad y = w_2(a, b), \quad z = w_3(a, b), \qquad \text{for } (a, b) \in [0,1] \times [0,1]. \qquad (3.3)
\]

3.2 Description of Method

The method we use to minimize the function in Equation (3.2) utilizes LSGRG2 in a two-part iteration. This method is very similar to Method 3 in Chapter 2, except that we do not make use of any sort of least-squares routine to determine the surface of minimal error. Our motivation for excluding such a routine lies in our desire not to restrict the $\tau$'s to points on a grid but to allow any points within the unit square parametric region. The routine provided by [1] does not allow this freedom; LSGRG2 does.



Our method is a two-part iteration repeated until some error tolerance level is reached. The first part of the iteration is to find the values of each pair of $\tau$'s using LSGRG2: we find the point $(\tau_{1i}, \tau_{2i})$ that produces the point on the surface in Equation (3.3) closest to the data point $(x_i, y_i, z_i)$, for $i = 1, \dots, m$. In the second part of the iteration we use LSGRG2 to find the surface which best approximates the data, where the variables sent to LSGRG2 are the spline surface's coefficients, $c$, as defined above. By repeating this iteration we hope to close in on the surface that best approximates the data. In both parts of the iteration the objective function which LSGRG2 minimizes is given by Equation (3.2).
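The control flow of this two-part iteration is summarized by the following C sketch. The routines project_points() and fit_coefficients() are hypothetical stand-ins for the two LSGRG2 calls described above (LSGRG2's actual calling sequence is not reproduced here), and their empty stub bodies exist only so the sketch compiles.

#include <math.h>

/* Stand-ins for the two LSGRG2 minimizations of Equation (3.2):
   part 1 holds c fixed and moves the tau's; part 2 holds the tau's
   fixed and refits c.  Both are hypothetical stubs for illustration. */
void   project_points(double *tau, const double *c)   { }
double fit_coefficients(double *c, const double *tau) { return 0.0; }

/* Repeat the two-part iteration until the objective G stagnates. */
void odr_surface(double *c, double *tau)
{
    const double tol = 1.0e-6;
    double g_old = HUGE_VAL;
    for (int iter = 0; iter < 100; iter++) {
        project_points(tau, c);               /* part 1: closest-point search */
        double g = fit_coefficients(c, tau);  /* part 2: best surface for fixed tau's */
        if (g_old - g < tol)
            break;                            /* error tolerance level reached */
        g_old = g;
    }
}

Because each part, if solved exactly, can only decrease $G$, the sequence of objective values is nonincreasing; as with Method 3 of Chapter 2, this makes stagnation easy to detect but does not guarantee convergence to a global minimizer.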

3.3 Numerical Examples

In each of the following examples, we try to show that this method does preserve some level of structure when given a reasonable set of data. It should be clear that each data set is very structured and is generated by parametric equations. We limit our numerical examples to very nice parametric surface data because this method does not deal with any of the difficult issues that lie in the geometric and topological structure of more complicated data. This exposition serves only to introduce the possibility of more research in this area.

3.3.1 Example 1, A Paraboloid

This first example is a very simple paraboloid generated by the parametric equations:

\[
x = \sqrt{s}\cos(t), \quad y = \sqrt{s}\sin(t), \quad z = s, \qquad \text{for } 0 \le t \le 2\pi,\; 0 \le s \le 2.
\]

This example allows us to examine how well the above method might approximate an open surface, by which we mean a surface that does not enclose a region in space. A paraboloid is such a surface, as it is open at one end. In Figure 3.1, we see how the final approximated surface compares with the original paraboloid. In this





Figure 3.1: Example 5, Paraboloid and Approximated Paraboloid

Figure 3.2: Example 5, Progression toward Approximation of Paraboloid

example, we use 49 data points with $n_1 = n_2 = 5$ and $h = k = 4$, i.e., cubic tensor product splines. As you can see, there is very little difference between the finished product and the original paraboloid. To emphasize that the second surface is an approximation, Figure 3.2 shows a sequence of four surfaces that LSGRG2 generates during its minimization of the objective function.
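The 49 data points above can be generated, for instance, by sampling the parameters on a 7 x 7 grid. The thesis does not record exactly how the samples were drawn, so the uniform grid in the following C snippet is an assumption made for illustration.

#include <math.h>
#include <stdio.h>

/* Sample 49 points (x,y,z) from the paraboloid on an assumed
   uniform 7x7 parameter grid over 0 <= t <= 2*pi, 0 <= s <= 2. */
int main(void)
{
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < 7; i++) {
        for (int j = 0; j < 7; j++) {
            double t = 2.0 * pi * i / 6.0;
            double s = 2.0 * j / 6.0;
            printf("%f %f %f\n", sqrt(s) * cos(t), sqrt(s) * sin(t), s);
        }
    }
    return 0;
}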

3.3.2 Example 2, Ring Torus

This example examines how well our approximation method might approximate a closed surface, that is, a surface that completely encloses some region in space. A ring torus is one example of this type of surface. The ring torus in Figure 3.3, which appears side by side with the final approximated surface, is generated by the parametric equations:

\[
x = (c + a\cos(t))\cos(s), \quad y = (c + a\cos(t))\sin(s), \quad z = a\sin(t), \qquad \text{where } a = 0.5,\; c = 1,\; 0 \le s, t \le 2\pi. \qquad (3.4)
\]

As in Figure 3.2, the frame-by-frame advance in Figure 3.4 shows the progression

LSGRG2 makes towards the approximation.




Figure 3.3: Example 6, Ring Torus and Approximated Ring Torus

Figure 3.4: Example 6, Progression toward Approximation of Ring Torus

3.3.3 Example 3, Spindle Torus

The final example challenges the approximation method by using data from a parametric surface which intersects itself. The spindle torus is defined by the same equations as the ring torus in Equation (3.4), with the exception that $a = 1$ and $c = 0.5$. Changing the values of these parameters has the effect of squeezing the ring torus upon itself, producing the image in Figure 3.5, which shows an outside and an inside view of the spindle torus. By approximating 49 data points on the surface of the spindle torus with tensor product cubic splines of dimension $n_1 = n_2 = 5$, we arrive at the approximated surface in Figure 3.6.

The results of these examples demonstrate that, in the case of very simply structured data,


Figure 3.5: An Outside and Inside View of the Spindle Torus



Figure 3.6: Example 7, An Approximated Spindle Torus from Outside and Inside

without any shuffling of data, LSGRG2 produces a quality approximation that maintains the structure of the data. We emphasize, once again, that much more work remains to be done on this subject, dealing with many unaddressed issues, but this short exposition has served to introduce the possibility of approximating non-parameterized data with surfaces produced by a bivariate parametric approximation method.

One particular application of such a method arises in the field of computer imaging. Devices exist that utilize lasers to create a point cloud of coordinate data that represents some object; that is, such a device can produce a large data set of coordinates of any object it scans. For example, one such scan was done of a ranch house, producing millions of points on the surface of the house and surrounding trees. Currently, crude methods exist for converting this data set into a surface to be visualized in a computer visualization environment, including shape recognition and a shrink-wrapping technique that makes use of every data point in the data set. Considering the size of the data set, it would be advantageous to approximate the data by some surface, reducing the amount of data needed to produce the surface from the millions of data points to some optimized set of coefficients. Clearly, in the case of an object such as a tree, many issues arise, such as whether a method of surface approximation can be developed that maintains the structure of the branches at the finest level. Future work by the author includes developing such a method.



CHAPTER 4

THEORETICAL AND COMPUTING EXTENSION OF THE PARAMETRIC

APPROXIMATION PROBLEM

4.1 Minimal Order of Parametric Approximation of Data Using Polynomials

Through the course of investigating many of the possible methods for approximating data, several questions arose that could be dealt with theoretically. Many of these questions reduced to previously answered questions based on the properties of B-splines and other approximation techniques. However, one particular issue presented itself and remains unanswered in the author's research. We include it at the close of this paper as a possible topic of other articles or future work by the author.

Consider some data set of arbitrary dimension, $f_i = (x_{1i}, x_{2i}, \dots, x_{di})$ for $i = 1, \dots, m$. We wish to solve the following approximation problem: Find $p^*$ such that

\[
E(p^*, f) = \inf_{p,\, \tau} \Bigl\{ E(p, f) \; : \; p = \{p_1, p_2, \dots, p_d\},\; p_\mu \in \pi_n,\; \mu = 1, \dots, d \Bigr\}, \qquad (4.1)
\]

where $E$ is defined as $E(g, f) = \sum_{i=1}^{m} \| f_i - g(\tau_i) \|^2$. This is actually the same approximation problem with which this entire paper has dealt, with the following exception: instead of restricting our solution functions to a spline space, we now restrict them to the space $\pi_n$ of polynomials of order $n$. The question to be answered in this case is: What is the minimum value of $n$ such that the parametric curve $p^*$ interpolates $f$? This question is not as simple as it first appears, particularly because in minimizing the objective function we have the freedom to choose not only the coefficients of the polynomials but also the evaluation points, $\tau$, within the parametric interval $[0, 1]$. With this added freedom and arbitrary dimension, we have a much more difficult problem to solve.
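A naive count of degrees of freedom hints at why the answer is not obvious. The bound below is only a heuristic observation, not a result established in this paper, since it ignores the nonlinear way the $\tau_i$ enter the interpolation conditions.

Interpolation requires $p(\tau_i) = f_i$ for $i = 1, \dots, m$, which is $dm$ scalar
equations. The unknowns are the $dn$ coefficients of $p_1, \dots, p_d$ together
with the $m$ evaluation points $\tau_i$, so matching unknowns to equations suggests
\[
dn + m \;\ge\; dm
\quad\Longleftrightarrow\quad
n \;\ge\; \frac{(d-1)\,m}{d}.
\]
Whether this count is attainable, and hence whether order $n \approx m - m/d$ ever suffices, is precisely the open question.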



4.2 Possibilities for High Performance and Parallel Computing

Most of the research for this paper was performed at the High Performance Computing Center (HPCC) at the Reese Development Center, a department of Texas Tech University. For this particular paper, many of the advantages available in high performance computing, such as parallel programming, were not utilized, specifically because most of the examples and data sets with which we dealt were small enough that any advantage would have been insignificant. In the case of large data sets, such as data collected by laser scanners, high performance computing would be of great advantage in speeding the calculation of the approximated surfaces and their display in a computer environment. In this final section of the paper we draw attention to several places in the approximation process where parallelism could greatly reduce the CPU time needed to calculate the solution.

The first, and probably the most useful, place for parallelism and optimization appears in our Methods 3, 4, and 5 in Chapter 2. At the outset of the algorithm we perform an expensive global search for the $\tau_i$'s that produce the points on the curve closest to each data point. Each search, for $i = 1, \dots, m$, is completely independent of the others, allowing for a complete division of the work over multiple processors or threads. In an environment where multiple processors are available, this amounts to sending each processor a subset of the $\tau$'s. Each processor performs the search and returns the appropriate values of the $\tau$'s to the main, or parent, processor. This could also be done in the bivariate parametric approximation method, in the first part, where the $\tau$'s are globally searched for by LSGRG2. Each processor would perform the search using LSGRG2 for its set of $\tau$'s, returning the appropriate values.
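As a concrete sketch of this division of work (and only a sketch: OpenMP was not used in this research, and closest_point() is a hypothetical stand-in for the per-point global search), the loop might look like this in C:

#include <omp.h>

/* Hypothetical stub: search the parametric interval for the tau that
   minimizes the distance from data point i to the current curve. */
void closest_point(int i, const double *data, const double *c, double *tau) { }

/* The m searches are independent, so a parallel loop divides them
   evenly among the available processors or threads. */
void parallel_search(int m, const double *data, const double *c, double *tau)
{
    #pragma omp parallel for
    for (int i = 0; i < m; i++)
        closest_point(i, data, c, tau);  /* search i writes only tau[i] */
}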

Another advantageous application of parallel programming lies in the actual computation of the final approximated surface. Once the coefficients for the splines have been calculated for the best approximation to the data, in both the univariate and bivariate cases, a large amount of processing time is spent in the calculation of the surface and its visualization. Depending on the size of the surface and the resolution at which it is viewed, the amount of work in this section can be enormous. It



is possible to divide up the parametric interval, or region, among various processors to compute the values needed to visualize the data. Once computed, we can take advantage of some of the high performance graphics capabilities of a computer such as the SGI Origin 2000 multi-processor supercomputer at the HPCC. The specially designed graphics pipes allow all the computation relating specifically to the visualization of the data to be done separately from the rest of the processors, which allows the computation of the splines to be done simultaneously with the visualization of the results, thus speeding up any sort of graphics projection.
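A sketch of this division, under the same OpenMP assumption as before: the rows of an N x N visualization grid over the parametric square are split among threads, each calling a tensor spline evaluator (here a stub standing in for a routine like the one sketched in Section 3.1).

#include <omp.h>

double eval_tensor(double a, double b) { return 0.0; } /* stub evaluator */

/* Fill an N x N grid (N > 1) of surface values, one coordinate function
   at a time, with whole rows of the parametric square given to each thread. */
void eval_grid(int N, double *w)
{
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            w[i*N + j] = eval_tensor((double)i / (N - 1), (double)j / (N - 1));
}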

The remaining opportunities for parallel programming lie in re-programming the methods of optimization used in this paper. Specifically, the two sources that would need to be evaluated and re-programmed to utilize parallelism are LSGRG2 and the spline library given in [1]. Because these are extensive libraries and routines, much time and work would need to be spent on such a project. Moreover, despite any advantages of implementing parallelism in these routines, one would have to be very cautious in utilizing the parallel versions, especially if implementing the previously suggested methods of parallel programming: nested parallelism can produce unexpected results as well as exhaust system resources within the supercomputer.

Overall, the methods presented in this paper have great opportunity for taking advantage of developments in high performance computing. This, combined with the problem of the minimal order of parametric approximation using polynomials, as well as the issues involved in surface approximation of data, leaves a great arena of research for the future.



BIBLIOGRAPHY

[1] De Boor, Carl, A Practical Guide to Splines, Springer-Verlag, New York, 1978.

[2] Harbison, Samuel P. and Steele, Guy L., Jr., C, A Reference Manual, Prentice Hall, Englewood Cliffs, 1991.

[3] Kincaid, David and Cheney, Ward, Numerical Analysis, 2nd Ed., Brooks/Cole, Pacific Grove, 1996.

[4] Kraus, Martin, LiveGraphics3D Example: A Butterfly, http://wwwvis.informatik.uni-stuttgart.de/~kraus/LiveGraphics3D/examples/butterfly.html, 1998.

[5] "LSGRG2 Product Information," Windward Products, Windward Technologies, Inc., http://www.maxthis.com/lsgrg2.htm.

[6] Marin, Samuel P. and Smith, Philip W., "Parametric approximation of data using ODR splines," Computer Aided Geometric Design, Vol. 11, 1994, pp. 247-267.

[7] Marrin, Chris and Campbell, Bruce, Teach Yourself VRML 2, Sams.net, Indianapolis, 1997.

[8] Prata, Stephen, C Primer Plus, 3rd Ed., Sams, Indianapolis, 1999.

[9] Smith, S. and Lasdon, L.S., "Solving Large Sparse Nonlinear Programs Using GRG," ORSA Journal on Computing, Vol. 4, No. 1, Winter 1992, pp. 1-15.



PERMISSION TO COPY

In presenting this thesis in partial fulfillment of the requirements for a

master's degree at Texas Tech University or Texas Tech University Health Sciences

Center, I agree that the Library and my major department shall make it freely

available for research purposes. Permission to copy this thesis for scholarly

purposes may be granted by the Director of the Library or my major professor.

It is understood that any copying or publication of this thesis for financial gain

shall not be allowed without my further written permission and that any user

may be liable for copyright infringement.

Agree (Permission is granted.)

Disagree (Permission is not granted.)

Student's Signature Date