\documentclass{article}
\usepackage{hyperref,color,amsmath,amsxtra,amssymb,latexsym,amscd,amsthm,amsfonts,graphicx}
\numberwithin{equation}{section}
\title{\Huge{Project on $\star$ Hermite interpolating polynomial}}
\author{\textsc{Nguyen Quan Ba Hong}\footnote{Student ID: 1411103.}\\\textsc{Doan Tran Nguyen Tung}\footnote{Student ID: 1411352.}\\{\small Students at Faculty of Math and Computer Science,}\\ {\small Ho Chi Minh University of Science, Vietnam} \\{\small \texttt{email: [email protected]}}\\{\small \texttt{email: [email protected]}}\\{\small \texttt{blog: \url{http://hongnguyenquanba.wordpress.com}} \footnote{Copyright \copyright\ 2016 by Nguyen Quan Ba Hong, Student at Ho Chi Minh University of Science, Vietnam. This document may be copied freely for the purposes of education and non-commercial research. Visit my site \texttt{\url{http://hongnguyenquanba.wordpress.com}} to get more.}}}
\begin{document}
\maketitle
\begin{abstract}
This paper contains my team's notes on \textit{the Hermite interpolating polynomial}, prepared as slides for the Numerical Analysis 1 class.\footnote{Contact us to get this file and the \TeX\ source of this file.}
\end{abstract}
\newpage
\tableofcontents
\newpage
\section{Introduction}
\subsection{Historical notes}
\subsection{A glance at Hermite interpolating polynomial}
The Lagrange interpolating polynomial, $P_n(x)$, has been defined so that the polynomial agrees with the original function $f(x)$ at $n + 1$ distinct input values $x_0, x_1, \hdots , x_n$. On the other hand, Taylor polynomials approximate a function using a \textit{single} center point at which we know the value of the function and the values of several derivatives. Our goal is to generalize both the Lagrange polynomial and the Taylor polynomial by forming an interpolating polynomial that agrees with the function both at several distinct points and at a given number of derivatives of the function at those distinct points. A polynomial that satisfies these conditions is called an \textit{osculating polynomial}.
\section{Introduction for Interpolation}
\textsc{Basic interpolating problem.}\\
For given data
\[\left( {{t_1},{y_1}} \right),\left( {{t_2},{y_2}} \right), \ldots ,\left( {{t_m},{y_m}} \right),\quad {t_1} <  \cdots  < {t_m}\]
determine function $f:\mathbb{R} \to \mathbb{R}$ such that
\[f\left( {{t_i}} \right) = {y_i},\quad i = 1, \ldots ,m\]
$f$ is \textit{interpolating function}, or \textit{interpolant}, for given data. Additional data might be prescribed, such as slope of interpolant at given points. Additional constraints might be imposed, such as smoothness, monotonicity, or convexity of interpolant. $f$ could be function of more than one variable.\\\\
\textsc{Purposes for Interpolation}
\begin{enumerate}
\item Plotting smooth curve through discrete data points.


\item Reading between lines of table.
\item Differentiating or integrating tabular data.
\item Quick and easy evaluation of mathematical function.
\item Replacing complicated function by simple one.
\end{enumerate}
\textsc{Interpolation vs Approximation}
\begin{enumerate}
\item By definition, interpolating function fits given data points exactly.
\item Interpolation is inappropriate if data points are subject to significant errors.
\item It is usually preferable to smooth noisy data.
\item Approximation is also more appropriate for special function libraries.
\end{enumerate}
\textsc{Choosing Interpolant.}\\
Choice of function for interpolation is based on:
\begin{enumerate}
\item How easy interpolating function is to work with:
\begin{itemize}
\item determining its parameters.
\item evaluating interpolant.
\item differentiating or integrating interpolant.
\end{itemize}
\item How well properties of interpolant match properties of data to be fit (smoothness, monotonicity, convexity, periodicity, etc.)
\end{enumerate}
\subsection{Functions for Interpolation}
Families of functions commonly used for interpolation include:
\begin{itemize}
\item Polynomials.
\item Piecewise polynomials.
\item Trigonometric functions.
\item Exponential functions.
\item Rational functions.
\end{itemize}
\textsc{Basis functions.}\\
Family of functions for interpolating given data points is spanned by set of \textit{basis functions}
\[{\phi _1}\left( t \right), \ldots ,{\phi _n}\left( t \right)\]
Interpolating function $f$ is chosen as linear combination of basis functions,
\[f\left( t \right) = \sum\limits_{j = 1}^n {{x_j}{\phi _j}\left( t \right)} \]
Requiring $f$ to interpolate data $(t_i,y_i)$ means
\[f\left( {{t_i}} \right) = \sum\limits_{j = 1}^n {{x_j}{\phi _j}\left( {{t_i}} \right)}  = {y_i},\quad i = 1, \ldots ,m\]
which is system of linear equations $A\mathbf{x}=\mathbf{y}$ for $n$-vector $\mathbf{x}$ of parameters $x_j$, where entries of $m \times n$ matrix $A$ are given by ${a_{ij}} = {\phi _j}\left( {{t_i}} \right)$. \\\\
\textsc{Existence, Uniqueness and Conditioning.}\\
Existence and uniqueness of interpolant depend on number of data points $m$ and number of basis functions $n$.
\begin{enumerate}
\item If $m>n$, interpolant usually doesn't exist.
\item If $m<n$, interpolant is not unique.
\item If $m=n$, then basis matrix $A$ is nonsingular provided data points $t_i$ are distinct, so data can be fit exactly. A minimal Matlab sketch of this square case appears below.
\end{enumerate}
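The following Matlab sketch (the data values and the basis choice are our own, for illustration only) sets up the square system $A\mathbf{x}=\mathbf{y}$ for $m=n$ and solves it with a standard linear solver.
\begin{verbatim}
% Minimal sketch (made-up data): fit f(t) = sum_j x_j*phi_j(t)
% through n points by solving the square system A*x = y,
% where A(i,j) = phi_j(t(i)).  Monomials are used here for
% illustration; any other basis could be substituted.
t = [0; 1; 2; 3];          % distinct data abscissas (example values)
y = [1; 2; 0; 5];          % data ordinates (example values)
n = length(t);
A = zeros(n,n);
for j = 1:n
    A(:,j) = t.^(j-1);     % basis function phi_j at all points t(i)
end
x = A\y;                   % coefficients of the interpolant
\end{verbatim}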


Sensitivity of parameters $\mathbf{x}$ to perturbations in data depends on $\mathrm{cond}(A)$, which depends in turn on choice of basis functions.\\\\
\textsc{Polynomial Interpolation.}\\
Simplest and most common type of interpolation uses polynomials. Unique polynomial of degree at most $n-1$ passes through $n$ data points $(t_i,y_i), i=1,\hdots,n$, where $t_i$ are distinct. There are many ways to represent or compute interpolating polynomial, but in theory all must give same result.\\\\
\textsc{Monomial Basis.}\\
\textit{Monomial basis functions} ${\phi _j}\left( t \right) = {t^{j - 1}},j = 1, \ldots ,n$ give interpolating polynomial of form
\[{p_{n - 1}}\left( t \right) = {x_1} + {x_2}t +  \cdots  + {x_n}{t^{n - 1}}\]
with coefficients $\mathbf{x}$ given by $n \times n$ linear system
\[A\mathbf{x} = \left[ {\begin{array}{*{20}{c}}1&{{t_1}}& \cdots &{t_1^{n - 1}}\\1&{{t_2}}& \cdots &{t_2^{n - 1}}\\ \vdots & \vdots & \ddots & \vdots \\1&{{t_n}}& \cdots &{t_n^{n - 1}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{x_1}}\\{{x_2}}\\ \vdots \\{{x_n}}\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}{{y_1}}\\{{y_2}}\\ \vdots \\{{y_n}}\end{array}} \right]\]
Matrix of this form is called \textit{Vandermonde matrix}. Solving system $A\mathbf{x}=\mathbf{y}$ using standard linear equation solver to determine coefficients $\mathbf{x}$ of interpolating polynomial requires $\mathcal{O}(n^3)$ work.\\
For monomial basis, matrix $A$ is increasingly ill-conditioned as degree increases. Ill-conditioning does not prevent fitting data points well, since residual for linear system solution will be small. But it does mean that values of coefficients are poorly determined. Both conditioning of linear system and amount of computational work required to solve it can be improved by using different basis. Change of basis still gives same interpolating polynomial for given data, but \textit{representation} of polynomial will be different.\\
Conditioning with monomial basis can be improved by shifting and scaling independent variable $t$,
\[{\phi _j}\left( t \right) = {\left( {\frac{{t - c}}{d}} \right)^{j - 1}}\]
where $c = \frac{{{t_1} + {t_n}}}{2}$ is midpoint and $d = \frac{{{t_n} - {t_1}}}{2}$ is half of range of data. New independent variable lies in interval $[-1,1]$, which also helps avoid overflow or harmful underflow. Even with optimal shifting and scaling, monomial basis usually is still poorly conditioned, and we must seek better alternatives.\\\\
\textsc{Evaluating Polynomials.}\\
When represented in monomial basis, polynomial
\[{p_{n - 1}}\left( t \right) = {x_1} + {x_2}t +  \cdots  + {x_n}{t^{n - 1}}\]
can be evaluated efficiently using \textit{Horner's nested evaluation} scheme
\[{p_{n - 1}}\left( t \right) = {x_1} + t\left( {{x_2} + t\left( {{x_3} + t\left( { \cdots \left( {{x_{n - 1}} + t{x_n}} \right) \cdots } \right)} \right)} \right)\]
which requires only $n$ additions and $n$ multiplications.\\
Other manipulations of interpolating polynomial, such as differentiation or integration, are also relatively easy with monomial basis representation.
\subsection{Lagrange Interpolation}
For given set of data points $(t_i,y_i),i=1,\hdots,n$, \textit{Lagrange basis functions} are defined by
\[{L_{n,i}}\left( t \right) = \prod\limits_{k = 1,k \ne i}^n {\frac{{t - {t_k}}}{{{t_i} - {t_k}}}} ,\quad i = 1, \ldots ,n\]
For Lagrange basis
\[{L_{n,i}}\left( {{t_j}} \right) = \left\{ {\begin{array}{*{20}{l}}{1,\mbox{ if } j = i}\\{0,\mbox{ if } j \ne i}\end{array}} \right.,\quad i,j = 1, \ldots ,n\]
so matrix of linear system $A\mathbf{x}=\mathbf{y}$ is identity matrix. Thus, Lagrange polynomial interpolating data points $(t_i,y_i)$ is given by
\[{p_{n - 1}}\left( t \right) = \sum\limits_{i = 1}^n {{y_i}{L_{n,i}}\left( t \right)} \]
Lagrange interpolant is easy to determine but more expensive to evaluate for given argument, compared with monomial basis representation. Lagrange form is also more difficult to differentiate, integrate, etc.
\subsection{Newton Interpolation}
For given set of data points $(t_i,y_i),i=1,\hdots,n$, \textit{Newton basis functions} are defined by
\[{\pi _j}\left( t \right) = \prod\limits_{k = 1}^{j - 1} {\left( {t - {t_k}} \right)} ,\quad j = 1, \ldots ,n\]
where value of product is taken to be $1$ when limits make it vacuous. \textit{Newton interpolating polynomial} has form
\[{p_{n - 1}}\left( t \right) = {x_1} + {x_2}\left( {t - {t_1}} \right) + {x_3}\left( {t - {t_1}} \right)\left( {t - {t_2}} \right) +  \cdots  + {x_n}\left( {t - {t_1}} \right) \cdots \left( {t - {t_{n - 1}}} \right)\]
For $i < j$, ${\pi _j}\left( {{t_i}} \right) = 0$, so basis matrix $A$ is lower triangular, where ${a_{ij}} = {\pi _j}\left( {{t_i}} \right)$.\\
Solution $\mathbf{x}$ to system $A\mathbf{x}=\mathbf{y}$ can be computed by forward-substitution in $\mathcal{O}(n^2)$ arithmetic operations. Moreover, resulting interpolant can be evaluated efficiently for any argument by nested evaluation scheme similar to Horner's method, as in the sketch below. Newton interpolation has better balance between cost of computing interpolant and cost of evaluating it.\\
If $p_j(t)$ is polynomial of degree $j-1$ interpolating $j$ given points, then for any constant $x_{j+1}$,
\[{p_{j + 1}}\left( t \right) = {p_j}\left( t \right) + {x_{j + 1}}{\pi _{j + 1}}\left( t \right)\]
is polynomial of degree $j$ that also interpolates same $j$ points. Free parameter $x_{j+1}$ can then be chosen so that $p_{j+1}(t)$ interpolates $y_{j+1}$,
\[{x_{j + 1}} = \frac{{{y_{j + 1}} - {p_j}\left( {{t_{j + 1}}} \right)}}{{{\pi _{j + 1}}\left( {{t_{j + 1}}} \right)}}\]
Newton interpolation begins with constant polynomial $p_1(t)=y_1$ interpolating first data point and then successively incorporates each remaining data point into interpolant.
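Here is a minimal Matlab sketch (the function name \verb|newtval| is our own) of nested evaluation of the Newton form, assuming the coefficients $x_j$ have already been computed:
\begin{verbatim}
% Nested evaluation of the Newton form, analogous to Horner's
% method: p = x(n); then p = x(k) + (u - t(k))*p for k = n-1,...,1.
% x holds the n Newton coefficients, t at least the first n-1
% points, and u the evaluation point(s).
function p = newtval(x,t,u)
n = length(x);
p = x(n);
for k = n-1:-1:1
    p = x(k) + (u - t(k)).*p;   % fold in one factor (u - t_k)
end
end
\end{verbatim}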

Given data points $(t_i,y_i),i=1,\hdots,n$, \textit{divided differences}, denoted by $f[\,]$, are defined recursively by
\[f\left[ {{t_1},{t_2}, \ldots ,{t_k}} \right] = \frac{{f\left[ {{t_2},{t_3}, \ldots ,{t_k}} \right] - f\left[ {{t_1},{t_2}, \ldots ,{t_{k - 1}}} \right]}}{{{t_k} - {t_1}}}\]
where recursion begins with
\[f\left[ {{t_k}} \right] = {y_k},\quad k = 1, \ldots ,n\]
Coefficient of $j$th basis function in Newton interpolant is given by
\[{x_j} = f\left[ {{t_1},{t_2}, \cdots ,{t_j}} \right]\]
Recursion requires $\mathcal{O}(n^2)$ arithmetic operations to compute coefficients of Newton interpolant, but is less prone to overflow or underflow than direct formation of triangular Newton basis matrix.
\subsection{Orthogonal Polynomials}
Inner product can be defined on space of polynomials on interval $[a,b]$ by taking
\[\left\langle {p,q} \right\rangle  = \int\limits_a^b {p\left( t \right)q\left( t \right)w\left( t \right)dt} \]
where $w(t)$ is nonnegative \textit{weight function}. Two polynomials $p$ and $q$ are \textit{orthogonal} if $\left\langle {p,q} \right\rangle  = 0$. Set of polynomials $\left\{ {{p_i}} \right\}$ is \textit{orthonormal} if
\[\left\langle {{p_i},{p_j}} \right\rangle  = \left\{ {\begin{array}{*{20}{l}}{1,\mbox{ if } i = j}\\{0,\mbox{ otherwise} }\end{array}} \right.\]
Given set of polynomials, \textit{Gram-Schmidt orthogonalization} can be used to generate orthonormal set spanning same space.\\
For example, with inner product given by weight $w\left( t \right) \equiv 1$ on interval $[-1,1]$, applying Gram-Schmidt process to set of monomials $1,t,t^2,t^3,\hdots$ yields \textit{Legendre polynomials}
\[1,t,\frac{{3{t^2} - 1}}{2},\frac{{5{t^3} - 3t}}{2},\frac{{35{t^4} - 30{t^2} + 3}}{8},\frac{{63{t^5} - 70{t^3} + 15t}}{8}, \ldots \]
first $n$ of which form an orthogonal basis for space of polynomials of degree at most $n-1$. Other choices of weight functions and intervals yield other orthogonal polynomials, such as Chebyshev, Jacobi, Laguerre and Hermite.\\\\
\textsc{Orthogonal polynomials have many useful properties.}
\begin{enumerate}
\item Orthogonal polynomials satisfy three-term recurrence relation of form
\[{p_{k + 1}}\left( t \right) = \left( {{\alpha _k}t + {\beta _k}} \right){p_k}\left( t \right) - {\gamma _k}{p_{k - 1}}\left( t \right)\]
which makes them very efficient to generate and evaluate.
\item Orthogonality makes them very natural for least squares approximation, and they are also useful for generating Gaussian quadrature rules, which we will see later.
\end{enumerate}
\subsection{Chebyshev Polynomials}
\textit{Chebyshev polynomials} of first kind, defined on interval $[-1,1]$ by
\[{T_k}\left( t \right) = \cos \left( {k\arccos t} \right)\]
are orthogonal with respect to weight function $\frac{1}{{\sqrt {1 - {t^2}} }}$.\\
\textsc{Equi-oscillation property.}\\
Successive extrema of $T_k$ are equal in magnitude and alternate in sign, which distributes error uniformly when approximating arbitrary continuous function.\\
\textbf{Chebyshev points} are zeros of $T_k$, given by
\[{t_i} = \cos \frac{{\left( {2i - 1} \right)\pi }}{{2k}},\quad i = 1, \ldots ,k\]
or extrema of $T_k$, given by
\[{t_i} = \cos \frac{{i\pi }}{k},\quad i = 0,1, \ldots ,k\]
Chebyshev points are abscissas of points equally spaced around unit circle in $\mathbb{R}^2$. Chebyshev points have attractive properties for interpolation and other problems.
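A minimal Matlab sketch (the interval endpoints are our own example values) of computing the $k$ Chebyshev points and mapping them to a general interval $[a,b]$:
\begin{verbatim}
% Zeros of T_k on [-1,1], then shifted and scaled to [a,b].
k = 8; a = 0; b = 4;               % example values
i = 1:k;
t = cos((2*i - 1)*pi/(2*k));       % Chebyshev points in (-1,1)
s = (a + b)/2 + (b - a)/2*t;       % images in (a,b)
\end{verbatim}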


\subsection{Interpolating Continuous Functions}
If data points are discrete sample of continuous function, how well does interpolant approximate that function between sample points? If $f$ is smooth function, and $p_{n-1}$ is polynomial of degree at most $n-1$ interpolating $f$ at $n$ points $t_1,\hdots,t_n$, then
\[f\left( t \right) - {p_{n - 1}}\left( t \right) = \frac{{{f^{\left( n \right)}}\left( \theta  \right)}}{{n!}}\left( {t - {t_1}} \right)\left( {t - {t_2}} \right) \cdots \left( {t - {t_n}} \right)\]
where $\theta$ is some point in interval $[t_1,t_n]$. Since point $\theta$ is unknown, this result is not particularly useful unless bound on appropriate derivative of $f$ is known.\\
If $\left| {{f^{\left( n \right)}}\left( t \right)} \right| \le M,\forall t \in \left[ {{t_1},{t_n}} \right]$, and $h = \max \left\{ {{t_{i + 1}} - {t_i}:i = 1, \ldots ,n - 1} \right\}$, then
\[\mathop {\max }\limits_{t \in \left[ {{t_1},{t_n}} \right]} \left| {f\left( t \right) - {p_{n - 1}}\left( t \right)} \right| \le \frac{{M{h^n}}}{{4n}}\]
Error diminishes with increasing $n$ and decreasing $h$, but only if $\left| {{f^{\left( n \right)}}\left( t \right)} \right|$ does not grow too rapidly with $n$.
\subsection{High-Degree Polynomial Interpolation}
\begin{enumerate}
\item Interpolating polynomials of high degree are expensive to determine and evaluate.
\item In some bases, coefficients of polynomial may be poorly determined due to ill-conditioning of linear system to be solved.
\item High-degree polynomial necessarily has lots of \lq wiggles\rq\, which may bear no relation to data to be fit.
\item Polynomial passes through required data points, but it may oscillate wildly between data points.
\end{enumerate}
\subsection{Convergence}
\begin{enumerate}
\item Polynomial interpolating continuous function may not converge to function as number of data points and polynomial degree increase.
\item Equally spaced interpolation points often yield unsatisfactory results near ends of interval.
\item If points are bunched near ends of interval, more satisfactory results are likely to be obtained with polynomial interpolation.
\item Use of Chebyshev points distributes error evenly and yields convergence throughout interval for any sufficiently smooth function.
\end{enumerate}
\subsection{Taylor polynomial}
Another useful form of polynomial interpolation for smooth function $f$ is polynomial given by truncated Taylor series
\[{p_n}\left( t \right) = f\left( a \right) + f'\left( a \right)\left( {t - a} \right) + \frac{{f''\left( a \right)}}{2}{\left( {t - a} \right)^2} +  \cdots  + \frac{{{f^{\left( n \right)}}\left( a \right)}}{{n!}}{\left( {t - a} \right)^n}\]
Polynomial interpolates $f$ in that values of $p_n$ and its first $n$ derivatives match those of $f$ and its first $n$ derivatives evaluated at $t=a$, so $p_n(t)$ is good approximation to $f(t)$ for $t$ near $a$.
\subsection{Piecewise polynomial interpolation}
\begin{enumerate}
\item Fitting single polynomial to large number of data points is likely to yield unsatisfactory oscillating behavior in interpolant.
\item Piecewise
polynomials provide alternative that avoids practical and theoretical difficulties of high-degree polynomial interpolation.
\item Main advantage of piecewise polynomial interpolation is that large number of data points can be fit with low-degree polynomials.
\item In piecewise interpolation of given data points $(t_i,y_i)$, \textit{different} function is used in each subinterval $[t_i,t_{i+1}]$.
\item Abscissas $t_i$ are called \textit{knots} or \textit{breakpoints}, at which interpolant changes from one function to another.
\item Simplest example is piecewise linear interpolation, in which successive pairs of data points are connected by straight lines.
\item Although piecewise interpolation eliminates excessive oscillation and nonconvergence, it appears to sacrifice smoothness of interpolating function.
\item We have many degrees of freedom in choosing piecewise polynomial interpolant, however, which can be exploited to obtain smooth interpolating function despite its piecewise nature.
\end{enumerate}
\subsection{Hermite interpolation}
\begin{enumerate}
\item In \textit{Hermite interpolation}, derivatives as well as values of interpolating function are taken into account.
\item Including derivative values adds more equations to linear system that determines parameters of interpolating function.
\item To have unique solution, number of equations must equal number of parameters to be determined.
\item Piecewise cubic polynomials are typical choice for Hermite interpolation, providing flexibility, simplicity and efficiency.
\end{enumerate}
\subsection{Hermite Cubic interpolation}
\begin{enumerate}
\item \textit{Hermite cubic interpolant} is piecewise cubic polynomial interpolant with continuous first derivative.
\item Piecewise cubic polynomial with $n$ knots has $4(n-1)$ parameters to be determined.
\item Requiring that it interpolate given data gives $2(n-1)$ equations.
\item Requiring that it have one continuous derivative gives $n-2$ additional equations, or total of $3n-4$, which still leaves $n$ free parameters.
\item Thus, \textit{Hermite cubic interpolant} is not unique, and remaining free parameters can be chosen so that result satisfies additional constraints.
\end{enumerate}
\subsection{Cubic Spline Interpolation}
\begin{enumerate}
\item \textit{Spline} is piecewise polynomial of degree $k$ that is $k-1$ times continuously differentiable.
\item For example, linear spline is of degree $1$ and has $0$ continuous derivatives, i.e., it is continuous, but not smooth, and could be described as \lq broken line\rq.
\item \textit{Cubic spline} is piecewise cubic polynomial that is twice continuously differentiable.
\item As with Hermite cubic, interpolating given data and requiring one continuous derivative imposes $3n-4$ constraints on cubic spline.
\item Requiring continuous second derivative imposes $n-2$ additional constraints, leaving 2 remaining free parameters. Final two parameters can be fixed in various ways:
\begin{itemize}
\item Specify first derivative at endpoints $t_1$ and $t_n$.
\item Force second derivative to be zero at endpoints, which gives \textit{natural spline}.


\item Enforce \lq not-a-knot\rq\ condition, which forces two consecutive cubic pieces to be same.
\item Force first derivatives, as well as second derivatives, to match at endpoints $t_1$ and $t_n$ (if spline is to be periodic).
\end{itemize}
\end{enumerate}
\textsc{Hermite Cubic vs Spline Interpolation.}
\begin{enumerate}
\item Choice between Hermite cubic and spline interpolation depends on data to be fit and on purpose for doing interpolation.
\item If smoothness is of paramount importance, then spline interpolation may be most appropriate.
\item But Hermite cubic interpolant may have more pleasing visual appearance and allows flexibility to preserve monotonicity if original data are monotonic.
\item In any case, it is advisable to plot interpolant and data to help assess how well interpolating function captures behavior of original data.
\end{enumerate}
\subsection{B-splines}
\textbf{B-splines} form basis for family of spline functions of given degree. B-splines can be defined in various ways, including recursion, convolution, and divided differences. Although in practice we use only finite set of knots $t_1,\hdots,t_n$, for notational convenience we will assume infinite set of knots
\[ \cdots  < {t_{ - 2}} < {t_{ - 1}} < {t_0} < {t_1} < {t_2} <  \cdots \]
Additional knots can be taken as arbitrarily defined points outside interval $[t_1,t_n]$.\\
We will also use linear functions
\[v_i^k\left( t \right) = \frac{{t - {t_i}}}{{{t_{i + k}} - {t_i}}}\]
To start recursion, define B-splines of degree 0 by
\[B_i^0\left( t \right) = \left\{ {\begin{array}{*{20}{l}}{1,\mbox{ if } {t_i} \le t < {t_{i + 1}}}\\{0,\mbox{ otherwise }}\end{array}} \right.\]
and then for $k>0$ define B-splines of degree $k$ by
\[B_i^k\left( t \right) = v_i^k\left( t \right)B_i^{k - 1}\left( t \right) + \left( {1 - v_{i + 1}^k\left( t \right)} \right)B_{i + 1}^{k - 1}\left( t \right)\]
Since $B_i^0$ is piecewise constant and $v_i^k$ is linear, $B_i^1$ is piecewise linear. Similarly, $B_i^2$ is in turn piecewise quadratic, and in general, $B_i^k$ is piecewise polynomial of degree $k$.\\\\
\textsc{Important properties of B-spline functions $B_i^k$.}
\begin{enumerate}
\item For $t<t_i$ or $t>t_{i+k+1}$, $B_i^k(t)=0$.
\item For $t_i<t<t_{i+k+1}$, $B_i^k(t)>0$.
\item For all $t$, $\sum\limits_{i =  - \infty }^\infty  {B_i^k\left( t \right)}  = 1$.
\item For $k \ge 1$, $B_i^k$ has $k-1$ continuous derivatives.
\item Set of functions $\left\{ {B_{1 - k}^k, \ldots ,B_{n - 1}^k} \right\}$ is linearly independent on interval $[t_1,t_n]$ and spans space of all splines of degree $k$ having knots $t_i$.
\end{enumerate}
Properties 1 and 2 together say that B-spline functions have local support. Property 3 gives normalization. Property 4 says that they are indeed splines. Property 5 says that for given $k$, these functions form basis for set of all splines of degree $k$.\\
If we use B-spline basis, linear system to be solved for spline coefficients will be nonsingular and banded. Use of B-spline basis yields efficient and stable methods for determining and evaluating spline interpolants, and many library routines for spline interpolation are based on this approach. B-splines are also useful in many other contexts, such as numerical solution of differential equations.
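A minimal Matlab sketch (the function name \verb|bspl| is our own) of the recursion above, for a finite knot vector \verb|t|; the indices must satisfy \verb|i+k+1 <= length(t)|:
\begin{verbatim}
% bspl(t,i,k,u) evaluates B_i^k(u) by the recursion
% B_i^k = v_i^k B_i^{k-1} + (1 - v_{i+1}^k) B_{i+1}^{k-1}.
function b = bspl(t,i,k,u)
if k == 0
    b = double(t(i) <= u & u < t(i+1));      % degree-0 indicator
else
    v1 = (u - t(i))/(t(i+k) - t(i));         % v_i^k(u)
    v2 = (u - t(i+1))/(t(i+k+1) - t(i+1));   % v_{i+1}^k(u)
    b = v1.*bspl(t,i,k-1,u) + (1 - v2).*bspl(t,i+1,k-1,u);
end
end
\end{verbatim}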

\section{Definitions}
\textbf{Definition.} Assume $x_0,x_1,\hdots,x_n \in [a,b]$ are $n+1$ distinct numbers. Also, assume $m_0,m_1,\hdots,m_n$ are non-negative integers where each integer, $m_i$, corresponds with $x_i$. Further assume $f \in C^m[a,b]$ where $m = \max \left\{ {{m_i}:0 \le i \le n} \right\}$. The \textit{osculating polynomial} (or \textit{kissing interpolating polynomial}) that approximates $f$ is the polynomial $P(x)$ of least degree such that ${P^{\left( k \right)}}\left( {{x_i}} \right) = {f^{\left( k \right)}}\left( {{x_i}} \right)$ for each $i=0,1,\hdots,n$ and $k=0,1,\hdots,m_i$. That is:
\begin{equation}
\begin{array}{l}
f\left( {{x_0}} \right) = P\left( {{x_0}} \right),f'\left( {{x_0}} \right) = P'\left( {{x_0}} \right), \ldots ,{f^{\left( {{m_0}} \right)}}\left( {{x_0}} \right) = {P^{\left( {{m_0}} \right)}}\left( {{x_0}} \right)\\
f\left( {{x_1}} \right) = P\left( {{x_1}} \right),f'\left( {{x_1}} \right) = P'\left( {{x_1}} \right), \ldots ,{f^{\left( {{m_1}} \right)}}\left( {{x_1}} \right) = {P^{\left( {{m_1}} \right)}}\left( {{x_1}} \right)\\
 \vdots \\
f\left( {{x_n}} \right) = P\left( {{x_n}} \right),f'\left( {{x_n}} \right) = P'\left( {{x_n}} \right), \ldots ,{f^{\left( {{m_n}} \right)}}\left( {{x_n}} \right) = {P^{\left( {{m_n}} \right)}}\left( {{x_n}} \right)
\end{array}
\end{equation}
\textsc{Note.} When there is a single point, $x_0$, the osculating polynomial approximating $f$ is the Taylor polynomial of $m_0$th degree.\\
\textbf{Definition.} The osculating polynomial of $f$ formed when $m_0=m_1=\hdots=m_n=1$ is called the \textit{Hermite polynomial.}\\
\textsc{Note.} The graph of the Hermite polynomial of $f$ agrees with $f$ at $n+1$ distinct points and has the same tangent lines as $f$ at those $n+1$ distinct points.\\\\
\textbf{Theorem.} \textit{Assume $f \in C^1[a,b]$ and $x_0,x_1,\hdots,x_n \in [a,b]$ are distinct points. Then the unique polynomial of degree less than or equal to $2n+1$ agreeing with $f$ and $f'$ at $x_0,x_1,\hdots,x_n$ is given by
\begin{equation}
{H_{2n + 1}}\left( x \right) = \sum\limits_{j = 0}^n {f\left( {{x_j}} \right){H_{n,j}}\left( x \right)}  + \sum\limits_{j = 0}^n {f'\left( {{x_j}} \right){{\hat H}_{n,j}}\left( x \right)}
\end{equation}
where
\[{H_{n,j}}\left( x \right) = \left[ {1 - 2\left( {x - {x_j}} \right)L_{n,j}'\left( {{x_j}} \right)} \right]L_{n,j}^2\left( x \right)\]
and
\[{\hat H_{n,j}}\left( x \right) = \left( {x - {x_j}} \right)L_{n,j}^2\left( x \right)\]
${L_{n,k}}\left( x \right)$ is the Lagrange coefficient polynomial
\[{L_{n,k}}\left( x \right) = \frac{{\left( {x - {x_0}} \right) \cdots \left( {x - {x_{k - 1}}} \right)\left( {x - {x_{k + 1}}} \right) \cdots \left( {x - {x_n}} \right)}}{{\left( {{x_k} - {x_0}} \right) \cdots \left( {{x_k} - {x_{k - 1}}} \right)\left( {{x_k} - {x_{k + 1}}} \right) \cdots \left( {{x_k} - {x_n}} \right)}}\]}
\subsection{Attack 1}
Add to the original interpolation conditions $P(x_j)=y_j, j=0,1, \hdots,n$, the derivative conditions
\begin{equation}
\begin{array}{l}
P'\left( {{x_j}} \right) = {y_j}',j = 0,1, \ldots ,n, \mbox{ i.e., }\\
{a_1} + 2{a_2}{x_j} +  \cdots  + 2n{a_{2n}}x_j^{2n - 1} + \left( {2n + 1} \right){a_{2n + 1}}x_j^{2n} = {y_j}'
\end{array}
\end{equation}
Now for each node $x_j$, we need two numbers $y_j$ and ${y_j}'$. This increase of input data allows us more influence on the interpolant's shape, but requires that we roughly double its degree, from $(n+1)-1=n$ to $(2n+2)-1=2n+1$. For each $j$ the two linear equations correspond to the two rows
\[\left[ {\begin{array}{*{20}{c}}1&{{x_j}}&{x_j^2}&{x_j^3}& \cdots &{x_j^{2n + 1}}\\0&1&{2{x_j}}&{3x_j^2}& \cdots &{\left( {2n + 1} \right)x_j^{2n}}\end{array}} \right]\]
of a \textit{generalized Vandermonde system,} $Va=y$, where $a = {\left( {{a_0},{a_1}, \ldots ,{a_{2n + 1}}} \right)^t}$, and $y = {\left( {{y_0},{y_0}',{y_1},{y_1}', \ldots ,{y_n},{y_n}'} \right)^t}$. $V$ is nonsingular iff the $x_j$ are distinct, and thus there is a unique polynomial of degree no more than $2n+1$ which interpolates the data $(x_j,y_j),(x_j,y_j')$; it is called the \textit{Hermite interpolating polynomial.} There are explicit formulas for this polynomial in various bases, but they are simply different representations for the polynomial $P$ above, whose coefficients are $a = V^{-1}y$.\\
The Lagrange form for the Hermite polynomial takes a very nice form for theoretical work. Define
\[{H_{n,j}}\left( x \right) = \left[ {1 - 2\left( {x - {x_j}} \right)L_{n,j}'\left( {{x_j}} \right)} \right]L_{n,j}^2\left( x \right)\]
and
\[{\hat H_{n,j}}\left( x \right) = \left( {x - {x_j}} \right)L_{n,j}^2\left( x \right)\]
where $L_{n,j}(x)$ are the standard Lagrange basis polynomials.\\
Check out this handy set of nodal properties:
\[\begin{array}{l}
{H_{n,i}}\left( {{x_j}} \right) = {\delta _{ij}},{{\hat H}_{n,i}}\left( {{x_j}} \right) = 0\\
{H_{n,i}}'\left( {{x_j}} \right) = 0,{{\hat H}_{n,i}}'\left( {{x_j}} \right) = {\delta _{ij}}
\end{array}\]
It is easy to verify that the \lq \textit{Lagrange form}\rq\ of this Hermite interpolator is
\begin{equation}
P\left( x \right) = \sum\limits_{j = 0}^n {\left[ {{y_j}{H_{n,j}}\left( x \right) + {y_j}'{{\hat H}_{n,j}}\left( x \right)} \right]}
\end{equation}
If the data are associated with a smooth enough function, then we have an error formula: If ${y_j} = f\left( {{x_j}} \right),{y_j}' = f'\left( {{x_j}} \right)$ and $[a,b]$ contains the nodes, then $\exists \xi  \in \left[ {a,b} \right]$ with
\begin{equation}
f\left( x \right) = P\left( x \right) + \frac{{{f^{\left( {2n + 2} \right)}}\left( \xi  \right)}}{{\left( {2n + 2} \right)!}}\prod\limits_{j = 0}^n {{{\left( {x - {x_j}} \right)}^2}}
\end{equation}
\subsection{Attack 2}
Given $\left\{ {{x_i}} \right\},i = 1, \ldots ,k$ and values $a_i^{\left( 0 \right)}, \ldots ,a_i^{\left( {{r_i}} \right)}$ where $r_i$ are nonnegative integers. We want to construct a polynomial $P(t)$ such that
\begin{equation}
{P^{\left( j \right)}}\left( {{x_i}} \right) = a_i^{\left( j \right)}
\end{equation}
for $i=1,\hdots,k$ and $j=0,\hdots,r_i$.\\
Such a polynomial is called an \textit{osculatory (kissing) interpolating polynomial} of a function $f$ if $a_i^{\left( j \right)} = {f^{\left( j \right)}}\left( {{x_i}} \right),\forall i,j$.\\
\textsc{Special cases.}
\begin{enumerate}
\item Suppose $r_i=0$ for all $i$. Then this is simply the ordinary Lagrange or Newton interpolation.
\item Suppose $k=1,x_1=a,r_1=n-1$; then the osculatory polynomial becomes
\[P\left( t \right) = \sum\limits_{i = 0}^{n - 1} {{f^{\left( i \right)}}\left( a \right)\frac{{{{\left( {t - a} \right)}^i}}}{{i!}}} \]
which is the Taylor polynomial of $f$ at $a$.
\item One of the most interesting osculatory interpolations is when $r_i=1$ for all $i=1,\hdots,k$. That is, the values of $f(x_i)$ and $f'(x_i)$ are to be interpolated. The resulting $(2k-1)$-degree polynomial is called the \textit{Hermite interpolating polynomial.}
\end{enumerate}
\subsection{Attack 3 (Reverse)}
Hermite interpolation interpolates function values and function derivatives at the interpolation points. Let the interpolation points be $x_i,i=0,1,2,\hdots,n$. Let the Hermite interpolation polynomial be
\begin{equation}
{p_{2n + 1}}\left( x \right) = \sum\limits_{i = 0}^n {\left( {{H_i}\left( x \right)f\left( {{x_i}} \right) + {K_i}\left( x \right)f'\left( {{x_i}} \right)} \right)}
\end{equation}
where both $H_i(x)$ and $K_i(x)$ are polynomials of degree $2n+1$. Further, to satisfy the interpolation conditions, we need
\[{H_i}\left( {{x_j}} \right) = {\delta _{ij}},{H_i}'\left( {{x_j}} \right) = 0,{K_i}\left( {{x_j}} \right) = 0,{K_i}'\left( {{x_j}} \right) = {\delta _{ij}}\]
With ${\omega _{n + 1}}\left( x \right) = \prod\nolimits_{j = 0}^n {\left( {x - {x_j}} \right)} $, we have that
\[{l_i}\left( x \right) = \frac{{{\omega _{n + 1}}\left( x \right)}}{{\left( {x - {x_i}} \right){\omega _{n + 1}}'\left( {{x_i}} \right)}}\]
is a polynomial of degree $n$ and ${l_i}\left( {{x_j}} \right) = {\delta _{ij}}$. Let us choose
\[{H_i}\left( x \right) = {r_i}\left( x \right)l_i^2\left( x \right),{K_i}\left( x \right) = {s_i}\left( x \right)l_i^2\left( x \right)\]
Then to satisfy the conditions, we need to impose
\[\begin{array}{l}
{r_i}\left( {{x_i}} \right) = 1,{r_i}'\left( {{x_i}} \right) + 2{l_i}'\left( {{x_i}} \right) = 0\\
{s_i}\left( {{x_i}} \right) = 0,{s_i}'\left( {{x_i}} \right) = 1
\end{array}\]
Hence ${r_i}\left( x \right) = 1 - 2{l_i}'\left( {{x_i}} \right)\left( {x - {x_i}} \right),{s_i}\left( x \right) = x - {x_i}$. Hence
\begin{equation}
\begin{array}{l}
{H_i}\left( x \right) = \left( {1 - 2{l_i}'\left( {{x_i}} \right)\left( {x - {x_i}} \right)} \right)l_i^2\left( x \right)\\
{K_i}\left( x \right) = \left( {x - {x_i}} \right)l_i^2\left( x \right)
\end{array}
\end{equation}
To prove that this polynomial is the unique polynomial of degree less than or equal to $2n+1$, suppose there exists another such polynomial $q_{2n+1}(x)$. Then $d\left( x \right) = {p_{2n + 1}}\left( x \right) - {q_{2n + 1}}\left( x \right)$ is a polynomial of degree less than or equal to $2n+1$. Since $d(x_i)=0$ for $i=0,1,2,\hdots,n$, it follows from Rolle's theorem that $d'(x)$ has $n$ zeros that lie in the intervals $\left( {{x_{i - 1}},{x_i}} \right)$ for $i=1,2,\hdots,n$. Further, since $d'(x_i)=0$, it is clear that $d'(x)$ has $n+1$ additional zeros. Hence $d'(x)$ has $2n+1$ distinct zeros. But $d'(x)$ is a polynomial of degree less than or equal to $2n$, so $d'\left( x \right) \equiv 0$; thus $d(x)$ is constant, and since $d(x_i)=0$, $d\left( x \right) \equiv 0$.\\
To find the error formula, let $\overline x $ be a point different from the $x_i$'s and let
\[f\left( {\overline x } \right) - {p_{2n + 1}}\left( {\overline x } \right) = c\left( {\overline x } \right)\omega _{n + 1}^2\left( {\overline x } \right)\]
Now consider the function
\[\phi \left( t \right) = f\left( t \right) - {p_{2n + 1}}\left( t \right) - c\left( {\overline x } \right)\omega _{n + 1}^2\left( t \right)\]
Then
\[\begin{array}{l}
\phi \left( {{x_i}} \right) = 0,i = 0,1,2, \ldots ,n\\
\phi \left( {\overline x } \right) = 0
\end{array}\]
Let $a = \mathop {\min }\limits_{0 \le i \le n} \left\{ {\overline x ,{x_i}} \right\},b = \mathop {\max }\limits_{0 \le i \le n} \left\{ {\overline x ,{x_i}} \right\}$. Now by Rolle's theorem, $\phi '\left( t \right)$ vanishes $(n+1)$ times in the interior of the intervals formed by the $x_i$'s and $\overline{x}$. Further $\phi '\left( t \right)$ vanishes at the $(n+1)$ points $x_i$. Clearly, $\phi '\left( t \right)$ has $(2n+2)$ distinct zeros in $[a,b]$. By repeated application of Rolle's theorem, ${\phi ^{\left( {2n + 2} \right)}}\left( \xi  \right) = 0$ for some $\xi  \in (a,b)$. This implies
\[{f^{\left( {2n + 2} \right)}}\left( \xi  \right) - c\left( {\overline x } \right)\left( {2n + 2} \right)! = 0\]
Hence
\begin{equation}
f\left( {\overline x } \right) - {p_{2n + 1}}\left( {\overline x } \right) = \frac{{{f^{\left( {2n + 2} \right)}}\left( \xi  \right)}}{{\left( {2n + 2} \right)!}}\omega _{n + 1}^2\left( {\overline x } \right)
\end{equation}
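A minimal Matlab sketch (the function name \verb|hermval| is our own) that evaluates the Hermite interpolant from the construction above, using ${l_i}'\left( {{x_i}} \right) = \sum\nolimits_{j \ne i} {\frac{1}{{{x_i} - {x_j}}}} $:
\begin{verbatim}
% p(u) = sum_i ( H_i(u)*f(x_i) + K_i(u)*fp(x_i) ), where H_i and
% K_i are built from l_i(u) and l_i'(x_i).  x, f, fp are vectors
% of nodes, function values, and derivative values.
function p = hermval(x,f,fp,u)
n = length(x);
p = zeros(size(u));
for i = 1:n
    li = ones(size(u)); dli = 0;   % l_i(u) and l_i'(x_i)
    for j = [1:i-1 i+1:n]
        li  = li.*(u - x(j))/(x(i) - x(j));
        dli = dli + 1/(x(i) - x(j));
    end
    Hi = (1 - 2*dli*(u - x(i))).*li.^2;
    Ki = (u - x(i)).*li.^2;
    p  = p + f(i)*Hi + fp(i)*Ki;
end
end
\end{verbatim}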


\subsection{Attack 4}
Suppose that the interpolation points are perturbed so that two neighboring points $x_i$ and $x_{i+1}$, $0 \le i < n$, approach each other. In the limit, as $x_{i+1} \to x_{i}$, the interpolating polynomial $p_n(x)$ not only satisfies $p_n(x_i)=y_i$, but also the condition
\[{p_n}'\left( {{x_i}} \right) = \mathop {\lim }\limits_{{x_{i + 1}} \to {x_i}} \frac{{{y_{i + 1}} - {y_i}}}{{{x_{i + 1}} - {x_i}}}\]
It follows that in order to ensure uniqueness, the data must specify the value of the derivative of the interpolating polynomial at $x_i$. In general, the inclusion of an interpolation point $x_i$ $k$ times within the set $x_0,x_1,\hdots,x_n$ must be accompanied by specification of $p_n^{\left( j \right)}\left( {{x_i}} \right),j = 0, \ldots ,k - 1$, in order to ensure a unique solution. These values are used in place of divided differences of identical interpolation points in Newton interpolation. Interpolation with repeated interpolation points is called osculatory interpolation, since it can be viewed as the limit of distinct interpolation points approaching one another, and the term \lq osculatory\rq\ is based on the Latin word for \lq kiss\rq.\\
In the case where each of the interpolation points $x_0,x_1,\hdots,x_n$ is repeated exactly once, the interpolating polynomial for a differentiable function $f(x)$ is called the \textit{Hermite polynomial} of $f(x)$ and is denoted by $H_{2n+1}(x)$, since this polynomial must have degree $2n+1$ in order to satisfy the $2n+2$ constraints
\[{H_{2n + 1}}\left( {{x_i}} \right) = f\left( {{x_i}} \right),{H_{2n + 1}}'\left( {{x_i}} \right) = f'\left( {{x_i}} \right),i = 0,1, \ldots ,n\]
To satisfy these constraints, we define, for $i=0,1,\hdots,n$,
\[\begin{array}{l}
{H_i}\left( x \right) = {\left[ {{L_i}\left( x \right)} \right]^2}\left( {1 - 2{L_i}'\left( {{x_i}} \right)\left( {x - {x_i}} \right)} \right)\\
{K_i}\left( x \right) = {\left[ {{L_i}\left( x \right)} \right]^2}\left( {x - {x_i}} \right)
\end{array}\]
where, as before, $L_i(x)$ is the $i$th Lagrange polynomial for the interpolation points $x_0,x_1,\hdots,x_n$. It can be verified directly that these polynomials satisfy, for $i,j=0,1,\hdots,n$,
\[\begin{array}{l}
{H_i}\left( {{x_j}} \right) = {\delta _{ij}},{H_i}'\left( {{x_j}} \right) = 0\\
{K_i}\left( {{x_j}} \right) = 0,{K_i}'\left( {{x_j}} \right) = {\delta _{ij}}
\end{array}\]
It follows that
\[{H_{2n + 1}}\left( x \right) = \sum\limits_{i = 0}^n {\left[ {f\left( {{x_i}} \right){H_i}\left( x \right) + f'\left( {{x_i}} \right){K_i}\left( x \right)} \right]} \]
is a polynomial of degree $2n+1$ that satisfies the above constraints.\\
To prove that this polynomial is the \textit{unique} polynomial of degree $2n+1$, we assume that there is another polynomial ${\tilde H_{2n + 1}}$ of degree $2n+1$ that satisfies the constraints. Because ${H_{2n + 1}}\left( {{x_i}} \right) = {\tilde H_{2n + 1}}\left( {{x_i}} \right) = f\left( {{x_i}} \right),i = 0,1, \ldots ,n$, ${H_{2n + 1}} - {\tilde H_{2n + 1}}$ has at least $n+1$ zeros. It follows from Rolle's theorem that ${H_{2n + 1}}' - {\tilde H_{2n + 1}}'$ has $n$ zeros that lie within the intervals $(x_{i-1},x_i)$ for $i=1,\hdots,n$. Furthermore, because ${H_{2n + 1}}'\left( {{x_i}} \right) = {\tilde H_{2n + 1}}'\left( {{x_i}} \right) = f'\left( {{x_i}} \right),i = 0,1, \ldots ,n$, it follows that ${H_{2n + 1}}' - {\tilde H_{2n + 1}}'$ has $n+1$ additional zeros, for a total of at least $2n+1$. However, ${H_{2n + 1}}' - {\tilde H_{2n + 1}}'$ is a polynomial of degree
at most $2n$, so it must be identically zero. Hence ${H_{2n + 1}} - {\tilde H_{2n + 1}}$ is constant, and since it vanishes at each $x_i$, we conclude that ${H_{2n + 1}} = {\tilde H_{2n + 1}}$, and the Hermite polynomial is unique.\\\\
\textbf{Theorem.} \textit{Let $f$ be $2n+2$ times continuously differentiable on $[a,b]$, and let $H_{2n+1}$ denote the Hermite polynomial of $f$ with interpolation points $x_0,x_1,\hdots,x_n$ in $[a,b]$. Then there exists a point $\xi \left( x \right) \in \left[ {a,b} \right]$ such that
\[f\left( x \right) - {H_{2n + 1}}\left( x \right) = \frac{{{f^{\left( {2n + 2} \right)}}\left( {\xi \left( x \right)} \right)}}{{\left( {2n + 2} \right)!}}{\left( {x - {x_0}} \right)^2} \cdots {\left( {x - {x_n}} \right)^2}\] }
\subsection{Attack 5}
The Matlab function \texttt{vander} generates Vandermonde matrices.
\[\left( {\begin{array}{*{20}{c}}{x_1^{n - 1}}&{x_1^{n - 2}}& \cdots &{{x_1}}&1\\{x_2^{n - 1}}&{x_2^{n - 2}}& \cdots &{{x_2}}&1\\ \vdots & \vdots & \ddots & \vdots & \vdots \\{x_n^{n - 1}}&{x_n^{n - 2}}& \cdots &{{x_n}}&1\end{array}} \right)\]
Below are some Matlab functions that implement various interpolation algorithms. All of them have the calling sequence
\begin{verbatim}
v = interp(x,y,u)
\end{verbatim}
The first two input arguments, $x$ and $y$, are vectors of the same length that define the interpolating points. The third argument, $u$, is a vector of points where the function is to be evaluated. The output $v$ is the same length as $u$ and has elements
\begin{verbatim}
v(k) = interp(x,y,u(k))
\end{verbatim}
Our first such interpolation function, \verb|polyinterp|, is based on the Lagrange form. The code uses Matlab array operations to evaluate the polynomial at all the components of $u$ simultaneously.
\begin{verbatim}
function v = polyinterp(x,y,u)
n = length(x);
v = zeros(size(u));
for k = 1:n
   w = ones(size(u));
   for j = [1:k-1 k+1:n]
      w = (u-x(j))./(x(k)-x(j)).*w;
   end
   v = v + w*y(k);
end
\end{verbatim}
The \verb|polyinterp| function also works correctly with symbolic variables. For example, create:
\begin{verbatim}
symx = sym('x')
\end{verbatim}
Then evaluate and display the symbolic form of the interpolating polynomial with
\begin{verbatim}
P = polyinterp(x,y,symx)
pretty(P)
P = simplify(P)
\end{verbatim}
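For example, with made-up data of our own, the degree-three interpolant can be evaluated on a fine grid and plotted against the data points:
\begin{verbatim}
% Example use of polyinterp (sample data are our own).
x = 0:3;
y = [-5 -6 -1 16];
u = -0.25:0.01:3.25;
v = polyinterp(x,y,u);
plot(x,y,'o',u,v,'-')
\end{verbatim}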

\section{Piecewise Linear Interpolation}
You can create a simple picture of the data set by plotting the data twice, once with circles at the data points and once with straight lines connecting the points. To generate the lines, the Matlab graphics routines use \textit{piecewise linear} interpolation. The algorithm sets the stage for more sophisticated algorithms. Three quantities are involved. The \textit{interval index $k$} must be determined so that
\[{x_k} \le x < {x_{k + 1}}\]
The \textit{local variable, $s$}, is given by
\[s = x - {x_k}\]
The \textit{first divided difference} is
\[{\delta _k} = \frac{{{y_{k + 1}} - {y_k}}}{{{x_{k + 1}} - {x_k}}}\]
With these quantities in hand, the interpolant is
\[L\left( x \right) = {y_k} + \left( {x - {x_k}} \right)\frac{{{y_{k + 1}} - {y_k}}}{{{x_{k + 1}} - {x_k}}} = y_k + s\delta_k \]
This is clearly a linear function that passes through $(x_k,y_k)$ and $(x_{k+1},y_{k+1})$. The points $x_k$ are sometimes called \textit{breakpoints} or \textit{breaks}. The piecewise linear interpolant $L(x)$ is a continuous function of $x$, but its first derivative, $L'(x)$, is not continuous. The derivative has a constant value, $\delta_k$, on each subinterval and jumps at the breakpoints.
\begin{verbatim}
function v = piecelin(x,y,u)
% PIECELIN Piecewise linear interpolation.
% v = piecelin(x,y,u) finds the piecewise linear L(x)
% with L(x(j)) = y(j) and returns v(k) = L(u(k)).

% First divided difference
delta = diff(y)./diff(x);

% Find subinterval indices k so that x(k) <= u < x(k+1)
n = length(x);
k = ones(size(u));
for j = 2:n-1
   k(x(j) <= u) = j;
end

% Evaluate interpolant
s = u - x(k);
v = y(k) + s.*delta(k);
\end{verbatim}
\section{Piecewise Cubic Hermite Interpolation}
Many of the most effective interpolation techniques are based on piecewise cubic polynomials. Let $h_k$ denote the length of the $k$th subinterval:
\[{h_k} = {x_{k + 1}} - {x_k}\]
Then the first divided difference, $\delta_k$, is given by
\[{\delta _k} = \frac{{{y_{k + 1}} - {y_k}}}{{{h_k}}}\]
Let $d_k$ denote the slope of the interpolant at $x_k$:
\[{d_k} = P'\left( {{x_k}} \right)\]
For the piecewise linear interpolant, $d_k=\delta_{k-1}$ or $\delta_k$, but this is not necessarily true for higher order interpolants.\\
Consider the following function on the interval $x_k \le x \le x_{k+1}$, expressed in terms of local variables $s=x-x_k$ and $h=h_k$:
\[P\left( x \right) = \frac{{3h{s^2} - 2{s^3}}}{{{h^3}}}{y_{k + 1}} + \frac{{{h^3} - 3h{s^2} + 2{s^3}}}{{{h^3}}}{y_k} + \frac{{{s^2}\left( {s - h} \right)}}{{{h^2}}}{d_{k + 1}} + \frac{{s{{\left( {s - h} \right)}^2}}}{{{h^2}}}{d_k}\]
This is a cubic polynomial in $s$, and hence in $x$, that satisfies four interpolation conditions, two on function values and two on the possibly unknown derivative values:
\[P\left( {{x_k}} \right) = {y_k},P\left( {{x_{k + 1}}} \right) = {y_{k + 1}},P'\left( {{x_k}} \right) = {d_k},P'\left( {{x_{k + 1}}} \right) = {d_{k + 1}}\]
Functions that satisfy interpolation conditions on derivatives are known as \textit{Hermite} or \textit{osculatory} interpolants, because of the higher order contact at the interpolation sites.
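A minimal Matlab sketch (the function name \verb|hermcube| is our own; the slopes are assumed already known) of evaluating this cubic on one subinterval:
\begin{verbatim}
% Evaluate the Hermite cubic on [xk, xk1], given endpoint
% values yk, yk1 and endpoint slopes dk, dk1, at points u.
function v = hermcube(xk,xk1,yk,yk1,dk,dk1,u)
h = xk1 - xk;                    % subinterval length h_k
s = u - xk;                      % local variable
v = (3*h*s.^2 - 2*s.^3)/h^3*yk1 ...
  + (h^3 - 3*h*s.^2 + 2*s.^3)/h^3*yk ...
  + (s.^2).*(s - h)/h^2*dk1 ...
  + s.*((s - h).^2)/h^2*dk;
end
\end{verbatim}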

\section{Differentiation}
\subsection{Finite difference approximations}
Recall that the derivative of $f(x)$ at a point $x_0$, denoted $f'(x_0)$, is defined by
\[f'\left( {{x_0}} \right) = \mathop {\lim }\limits_{h \to 0} \frac{{f\left( {{x_0} + h} \right) - f\left( {{x_0}} \right)}}{h}\]
This definition suggests a method for approximating $f'(x_0)$. If we choose $h$ to be a small positive constant, then
\[f'\left( {{x_0}} \right) \approx \frac{{f\left( {{x_0} + h} \right) - f\left( {{x_0}} \right)}}{h}\]
This approximation is called the \textit{forward difference formula}. To estimate the accuracy of this approximation, we note that if $f''(x)$ exists on $[x_0,x_0+h]$, then, by Taylor's theorem,
\[f\left( {{x_0} + h} \right) = f\left( {{x_0}} \right) + f'\left( {{x_0}} \right)h + f''\left( \xi  \right)\frac{{{h^2}}}{2}\]
where $\xi \in [x_0,x_0+h]$. Solving for $f'(x_0)$, we obtain
\[f'\left( {{x_0}} \right) = \frac{{f\left( {{x_0} + h} \right) - f\left( {{x_0}} \right)}}{h} + f''\left( \xi  \right)\frac{h}{2}\]
so the error in the \textit{forward difference formula} is $O(h)$. We say that this formula is \textit{first-order accurate}. The forward-difference formula is called a \textit{finite difference approximation} to $f'(x_0)$, because it approximates $f'(x)$ using values of $f(x)$ at points that have a small, but finite, distance between them, as opposed to the definition of the derivative, which takes a limit and therefore computes the derivative using an \lq infinitely
small\rq\ value of $h$. The forward-difference formula, however, is just one example of a finite difference approximation. If we replace $h$ by $-h$ in the forward-difference formula, where $h$ is still positive, we obtain the \textit{backward-difference formula}
\[f'\left( {{x_0}} \right) \approx \frac{{f\left( {{x_0}} \right) - f\left( {{x_0} - h} \right)}}{h}\]
Like the forward-difference formula, the backward-difference formula is first-order accurate. If we average these two approximations, we obtain the \textit{centered-difference formula}
\[f'\left( {{x_0}} \right) \approx \frac{{f\left( {{x_0} + h} \right) - f\left( {{x_0} - h} \right)}}{{2h}}\]
To determine the accuracy of this approximation, we assume that $f'''(x)$ exists on the interval $[x_0-h,x_0+h]$, and then apply Taylor's theorem again to obtain
\[\begin{array}{l}
f\left( {{x_0} + h} \right) = f\left( {{x_0}} \right) + f'\left( {{x_0}} \right)h + \frac{{f''\left( {{x_0}} \right)}}{2}{h^2} + \frac{{f'''\left( {{\xi _ + }} \right)}}{6}{h^3}\\
f\left( {{x_0} - h} \right) = f\left( {{x_0}} \right) - f'\left( {{x_0}} \right)h + \frac{{f''\left( {{x_0}} \right)}}{2}{h^2} - \frac{{f'''\left( {{\xi _ - }} \right)}}{6}{h^3}
\end{array}\]
where ${\xi _ + } \in \left[ {{x_0},{x_0} + h} \right],{\xi _ - } \in \left[ {{x_0} - h,{x_0}} \right]$. Subtracting the second equation from the first and solving for $f'(x_0)$ yields
\[f'\left( {{x_0}} \right) = \frac{{f\left( {{x_0} + h} \right) - f\left( {{x_0} - h} \right)}}{{2h}} - \frac{{f'''\left( {{\xi _ + }} \right) + f'''\left( {{\xi _ - }} \right)}}{{12}}{h^2}\]
By the Intermediate Value Theorem, $f'''(x)$ must assume every value between ${f'''\left( {{\xi _ - }} \right)}$ and ${f'''\left( {{\xi _ + }} \right)}$ on the interval $\left( {{\xi _ - },{\xi _ + }} \right)$, including the average of these two values. Therefore, we can simplify this equation to
\[f'\left( {{x_0}} \right) = \frac{{f\left( {{x_0} + h} \right) - f\left( {{x_0} - h} \right)}}{{2h}} - \frac{{f'''\left( \xi  \right)}}{6}{h^2}\]
where $\xi \in [x_0-h,x_0+h]$. We conclude that the centered-difference formula is \textit{second-order accurate.} This is due to the cancellation of the terms involving $f''(x_0)$.\\
While Taylor's Theorem can be used to derive formulas with higher-order accuracy simply by evaluating $f(x)$ at more points, this process can be tedious. An alternative approach is to compute the derivative of the interpolating polynomial that fits $f(x)$ at these points. Specifically, suppose we want to compute the derivative at a point $x_0$ using the data
\[\left( {{x_{ - j}},{y_{ - j}}} \right), \ldots ,\left( {{x_{ - 1}},{y_{ - 1}}} \right),\left( {{x_0},{y_0}} \right),\left( {{x_1},{y_1}} \right), \ldots ,\left( {{x_k},{y_k}} \right)\]
where $j$ and $k$ are known nonnegative integers, ${x_{ - j}} < {x_{ - j + 1}} <  \cdots  < {x_{k - 1}} < {x_k}$, and $y_i=f(x_i)$ for $i=-j,\hdots,k$. Then, a finite difference formula for $f'(x_0)$ can be obtained by analytically computing the derivatives of the Lagrange polynomials $\left\{ {{L_{n,i}}\left( x \right)} \right\}_{i =  - j}^k$ for these $n+1$ points, where $n=j+k$, and the values of these derivatives at $x_0$ are the proper weights for the function values $y_{-j},\hdots,y_k$. If $f(x)$ is $n+1$ times continuously differentiable on $[x_{-j},x_k]$, then we obtain an approximation of the form
\[f'\left( {{x_0}} \right) = \sum\limits_{i =  - j}^k {{y_i}{L_{n,i}}'\left( {{x_0}} \right)}  + \frac{{{f^{\left( {n + 1} \right)}}\left( \xi  \right)}}{{\left( {n + 1} \right)!}}\prod\limits_{i =  - j,i \ne 0}^k {\left( {{x_0} - {x_i}} \right)} \]
where $\xi \in [x_{-j},x_k]$.\\
Among the best-known finite difference formulas that can be derived using this approach is the \textit{second-order-accurate three-point formula}
\[f'\left( {{x_0}} \right) = \frac{{ - 3f\left( {{x_0}} \right) + 4f\left( {{x_0} + h} \right) - f\left( {{x_0} + 2h} \right)}}{{2h}} + \frac{{f'''\left( \xi  \right)}}{3}{h^2},\quad \xi  \in \left[ {{x_0},{x_0} + 2h} \right]\]
which is useful when there is no information available about $f(x)$ for $x < x_0$. If there is no information available about $f(x)$ for $x > x_0$, then we can replace $h$ by $-h$ in the above formula to obtain a \textit{second-order-accurate three-point formula} that uses the values of $f(x)$ at $x_0$, $x_0 - h$ and $x_0 - 2h$.\\
Another formula is the five-point formula
\[\begin{array}{l}
f'\left( {{x_0}} \right) = \frac{{f\left( {{x_0} - 2h} \right) - 8f\left( {{x_0} - h} \right) + 8f\left( {{x_0} + h} \right) - f\left( {{x_0} + 2h} \right)}}{{12h}} + \frac{{{f^{\left( 5 \right)}}\left( \xi  \right)}}{{30}}{h^4}\\
\mbox{for }\xi  \in \left[ {{x_0} - 2h,{x_0} + 2h} \right]
\end{array}\]
which is fourth-order accurate. The reason it is called a five-point formula, even though it uses the value of $f(x)$ at four points, is that it is derived from the Lagrange polynomials for the five points $x_0-2h,x_0-h,x_0,x_0+h,x_0+2h$. However, $f(x_0)$ is not used in the formula because ${L_{4,0}}'\left( {{x_0}} \right) = 0$, where $L_{4,0}(x)$ is the Lagrange polynomial that is equal to one at $x_0$ and zero at the other four points.\\
If we do not have any information about $f(x)$ for $x<x_0$, then we can use the following five-point formula that actually uses the values of $f(x)$ at five points
\[\begin{array}{l}
f'\left( {{x_0}} \right) = \frac{{ - 25f\left( {{x_0}} \right) + 48f\left( {{x_0} + h} \right) - 36f\left( {{x_0} + 2h} \right) + 16f\left( {{x_0} + 3h} \right) - 3f\left( {{x_0} + 4h} \right)}}{{12h}} + \frac{{{f^{\left( 5 \right)}}\left( \xi  \right)}}{5}{h^4}\\
\mbox{for }\xi  \in \left[ {{x_0},{x_0} + 4h} \right]
\end{array}\]
As before, we can replace $h$ by $-h$ to obtain a similar formula that approximates $f'(x_0)$ using the values of $f(x)$ at $x_0,x_0-h,x_0-2h,x_0-3h,x_0-4h$.\\
The strategy of differentiating Lagrange polynomials to approximate derivatives can be used to approximate higher-order derivatives. For example, the second derivative can be approximated using a centered difference formula
\[f''\left( {{x_0}} \right) \approx \frac{{f\left( {{x_0} + h} \right) - 2f\left( {{x_0}} \right) + f\left( {{x_0} - h} \right)}}{{{h^2}}}\]
which is second-order accurate.
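A minimal Matlab sketch (the test function and step sizes are our own choices) that exhibits the first- and second-order behavior of the forward and centered difference formulas:
\begin{verbatim}
% Errors of forward (O(h)) and centered (O(h^2)) differences
% for f(x) = exp(x) at x0 = 1, over decreasing h.
f = @exp; x0 = 1; fp = exp(1);     % exact derivative
for h = 10.^(-1:-1:-6)
    efwd = abs((f(x0+h) - f(x0))/h - fp);
    ectr = abs((f(x0+h) - f(x0-h))/(2*h) - fp);
    fprintf('h = %8.1e  fwd = %10.2e  ctr = %10.2e\n', ...
            h, efwd, ectr)
end
\end{verbatim}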

\section{The Interpolating polynomial}\textit{Interpolating} is the process of defining a function that \lq connects the dots\rq\ between specified (data) points. \subsection{Newton Interpolation}The Newton interpolating polynomial takes the form\begin{equation}
{P_n}\left( x \right) = {c_1} + {c_2}\left( {x - {x_1}} \right) + {c_3}\left( {x - {x_1}} \right)\left( {x - {x_2}} \right) + \cdots + {c_n}\prod\limits_{k = 1}^{n - 1} {\left( {x - {x_k}} \right)}\end{equation}Once the coefficients $c_k,k=1,2,\hdots,n$, have been computed, the Newton polynomial can be efficiently evaluated using Horner's method. Note also that Newton interpolation can be done \textit{incrementally}, i.e., the interpolant can be created \textit{as points are being added}. So we fit a straight line to two points, then add a point and fit a quadratic to three points, then add a point and fit a cubic to four points, etc.\\\textsc{Remark.}\begin{enumerate}\item The coefficients $c_k$ can be obtained \textit{recursively} in $\mathcal{O}(n^2)$ operations using \textit{divided differences}. These computations are less prone to overflow and underflow than the previous methods.\item In theory, any ordering of the interpolation points $x_k$ is acceptable, but the conditioning depends on this ordering. Left-to-right ordering is not necessarily the best. Two better ideas are to order the points in increasing distance from their mean, or from a specified point at which the interpolant will be evaluated.\end{enumerate}\subsection{Parametric Interpolation}None of the techniques described so far can be used to generate curves like the letter \lq S\rq. That is because the letter \lq S\rq\ is not the graph of a function (a vertical line intersects \lq S\rq\ more than once). One way to get around this problem is to describe the curve in terms of a \textit{parameter $t$}.\\We connect the points $(x_0,y_0),(x_1,y_1),\hdots,(x_n,y_n)$ \textit{in that order} by using a parameter $t \in [t_0,t_n]$ with $t_0<t_1<\hdots<t_n$ such that\[x\left( {{t_k}} \right) = {x_k},y\left( {{t_k}} \right) = {y_k},k = 0,1, \ldots ,n\]We could construct a pair of Lagrange polynomials to interpolate $x(t)$ and $y(t)$.\subsection{Piecewise Linear Interpolation}This is perhaps the most intuitive form of interpolation. Piecewise linear interpolation is simply connecting data points by straight lines. \textit{Linear interpolation} means to use straight-line interpolants. We say it is \textit{piecewise} interpolation because you normally need different straight lines to connect different pairs of points. If your data points are sufficiently close together, you do not notice the jaggedness associated with piecewise linear interpolation. Here is an outline of the algorithm used to produce a piecewise linear interpolant. The steps of this algorithm are used as a basis for more sophisticated piecewise polynomial interpolants (or \textit{splines}).\begin{description}\item[Step 1.] The \textit{interval index} $k$ is determined such that\[{x_k} \le x \le {x_{k + 1}}\]\item[Step 2.] Define a \textit{local variable} $s:=x-x_k$.\item[Step 3.] Compute the first \textit{divided difference}\[{\delta _k}: = \frac{{{y_{k + 1}} - {y_k}}}{{{x_{k + 1}} - {x_k}}}\]\item[Step 4.] Construct the interpolant\[P\left( x \right) = {y_k} + \frac{{{y_{k + 1}} - {y_k}}}{{{x_{k + 1}} - {x_k}}}\left( {x - {x_k}} \right) = {y_k} + {\delta _k}s\]\end{description}This is the Newton form of the linear interpolating polynomial. It can be generalized to higher-order divided differences, i.e., divided differences of divided differences. So we have constructed the straight line that passes through $(x_k,y_k)$ and $(x_{k+1},y_{k+1})$. The points $x_k$ are sometimes called \textit{breakpoints}.
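The four steps above translate almost line for line into Matlab. The following is a minimal sketch of our own (the function name \texttt{pwlinear} and the scalar loop over the query points are illustrative choices, not code from any reference):
\begin{verbatim}
function v = pwlinear(x0, y0, x)
% Piecewise linear interpolant through (x0,y0), evaluated at the points x.
n = length(x0); v = zeros(size(x));
for m = 1:length(x)
    k = max([1, find(x0 <= x(m), 1, 'last')]);  % Step 1: interval index
    k = min(k, n-1);
    s = x(m) - x0(k);                           % Step 2: local variable
    delta = (y0(k+1)-y0(k))/(x0(k+1)-x0(k));    % Step 3: divided difference
    v(m) = y0(k) + delta*s;                     % Step 4: interpolant
end
\end{verbatim}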
\\\textsc{Note.} $P(x)$ is a continuous function of $x$, but its first derivative $P'(x)$ is not: $P'(x)=\delta_k$ on each subinterval, and it jumps at the breakpoints.\subsection{Piecewise Cubic Hermite Interpolation}Many of the most effective interpolants are based on piecewise cubic polynomials. Let $h_k:=x_{k+1}-x_k$ be the length of the $k$th subinterval. Then\[{\delta _k} = \frac{{{y_{k + 1}} - {y_k}}}{{{h_k}}}\]Let $d_k:=P'(x_k)$.\\\textsc{Note.} If $P(x)$ is piecewise linear, then $d_k$ is not really defined, because $d_k=\delta_{k-1}$ on the left of $x_k$ but $d_k=\delta_k$ on the right of $x_k$, and usually $\delta_{k-1} \ne \delta_k$.\\For higher-order interpolants (like cubics), however, it is possible to force the interpolant to be \textit{smooth} at the breakpoints. This is done by forcing the derivative at the right end of one piecewise cubic to match the derivative at the left end of the next. A cubic polynomial has 4 degrees of freedom (i.e., the 4 coefficients $c_1,c_2,c_3,c_4$). We can specify 4 pieces of information, and as long as the $4 \times 4$ system of linear equations is non-singular, we can obtain unique values for the 4 unknowns $c_i$ and hence uniquely specify the cubic polynomial.\\What we will do now (in order to enforce smoothness) is to specify function values and slopes (first derivatives) at the end-points of each subinterval to define the piecewise cubic polynomial.\\Consider the following cubic polynomial on the interval $x_k \le x \le x_{k+1}$, expressed in terms of the local variables $s=x-x_k$ and $h=h_k$:\[P\left( x \right) = \frac{{3h{s^2} - 2{s^3}}}{{{h^3}}}{y_{k + 1}} + \frac{{{h^3} - 3h{s^2} + 2{s^3}}}{{{h^3}}}{y_k} + \frac{{{s^2}\left( {s - h} \right)}}{{{h^2}}}{d_{k + 1}} + \frac{{s{{\left( {s - h} \right)}^2}}}{{{h^2}}}{d_k}\]This is a cubic polynomial in $s$ (and hence in $x$) that satisfies 4 interpolation conditions: 2 on function values and 2 on (possibly unknown) derivative values.\begin{equation}\begin{array}{l}P\left( {{x_k}} \right) = {y_k},P\left( {{x_{k + 1}}} \right) = {y_{k + 1}}\\P'\left( {{x_k}} \right) = {d_k},P'\left( {{x_{k + 1}}} \right) = {d_{k + 1}}\end{array}\end{equation}Interpolants that also match derivatives are known as \textit{Hermite} or osculatory interpolants because of the higher-order contact at the breakpoints. (\textit{Osculari} is Latin for \lq to kiss\rq.) If we know both function values and first derivatives at a set of points, then a piecewise cubic Hermite interpolant can be fit to those data. But if we are not given the derivative values, we need to define the slopes $d_k$ somehow. Two possible ways to do this are implemented in the Matlab functions \texttt{pchip} and \texttt{spline}.\subsection{Shape-Preserving Cubic Spline (\texttt{pchip})}\texttt{pchip} stands for \textit{piecewise cubic Hermite interpolating polynomial}. Unfortunately, the catchy name does not precisely specify how the interpolant is defined; there are many ways to define a piecewise cubic Hermite interpolating polynomial. The key features of the \texttt{pchip} interpolant in Matlab are that it is \textit{shape-preserving} and \textit{visually pleasing}. The key idea is to determine the slopes $d_k$ so that the interpolant does not oscillate too much.
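To make this concrete, here is a sketch of one such slope rule: a weighted harmonic mean of the neighboring differences $\delta_{k-1},\delta_k$ that is set to zero wherever the data are not locally monotone. The interior weights follow, as far as we recall, the \texttt{pchipslopes} helper in Moler's \textit{Numerical Computing with MATLAB}; for simplicity the endpoint slopes, which \texttt{pchip} computes by a separate one-sided rule, are left at zero in this sketch.
\begin{verbatim}
function d = slopes(h, delta)
% Shape-preserving slopes d(k) from interval lengths h(k)=x(k+1)-x(k)
% and divided differences delta(k); endpoint slopes left at zero (sketch).
n = length(h) + 1; d = zeros(1, n);
for k = 2:n-1
    if sign(delta(k-1))*sign(delta(k)) > 0       % data locally monotone
        w1 = 2*h(k) + h(k-1); w2 = h(k) + 2*h(k-1);
        d(k) = (w1 + w2)/(w1/delta(k-1) + w2/delta(k));
    end                                          % else d(k) = 0
end
\end{verbatim}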
\subsection{Cubic Spline}The term \textit{spline} originates from the name of an instrument used in drafting. A real spline is a thin, flexible wooden or plastic instrument that is passed through given data points; it is used to define a smooth curve in between the points. Physically, the spline takes its shape by naturally minimizing its own potential energy subject to passing through the data points. Mathematically, the spline must have a continuous second derivative (curvature) and pass through the data points. The breakpoints of a spline are also referred to as \textit{knots} or \textit{nodes}.\\\textsc{Note.} Splines extend far beyond the one-dimensional, cubic, interpolatory spline we are studying: there are multidimensional, high-order, variable-knot and approximating splines.\\The first derivative $P'(x)$ of our piecewise cubic function is defined by different formulas on either side of a knot $x_k$. However, both formulas are designed to give the same value $d_k$ at $x_k$, so $P'(x)$ is continuous. We have no such guarantee for the second derivative; however, we choose continuity of the second derivative to be a defining condition for the cubic spline. Applying this approach to each interior knot $x_k, k=2,3,\hdots,n-1$, gives $n-2$ equations involving the $n$ unknowns $d_k$.\\A different approach is necessary near the ends of the interval. One effective strategy is known as \textit{not-a-knot}: use a single cubic on the first two subintervals ($x_1 \le x \le x_3$) and on the last two subintervals ($x_{n-2} \le x \le x_n$). If there is only one cubic on each of the first and last pairs of intervals, it is as if $x_2$ and $x_{n-1}$ were not there; they are not treated like the other knots. For example, on the first pair of intervals we can pretend there are two different cubic polynomials $P_1(x),P_2(x)$ and impose the condition\[{P_1}'''\left( {{x_2} - } \right) = {P_2}'''\left( {{x_2} + } \right)\]Together with the continuity of the cubic spline and its first two derivatives, this forces\[{P_1}\left( x \right) \equiv {P_2}\left( x \right)\]With the two end conditions, we now have $n$ linear equations in $n$ unknowns:\[{\mathbf{Ad}} = {\mathbf{r}}\]where $\mathbf{d} = {\left( {{d_1},{d_2}, \ldots ,{d_n}} \right)^T}$ is the vector of slopes. The slopes can now be computed by\[\mathbf{d} = \mathbf{A}\backslash\mathbf{r}\]Because most of the elements of $\mathbf{A}$ are zero, it is appropriate to store $\mathbf{A}$ in a \textit{sparse} data structure. The \verb|\| operator can then take advantage of the tridiagonal structure and solve the linear equations in time and storage proportional to $n$, the number of data points. \subsection{B\'{e}zier Curves}Applications in computer graphics and computer-aided design (CAD) require the rapid generation of smooth curves that can be quickly and conveniently modified. For reasons of aesthetics and computational expense, we do not want the entire shape of the curve to be affected by small local changes. This rules out interpolating polynomials and splines!\\In 1962, B\'{e}zier and de Casteljau independently developed a method for doing this while working on CAD systems for the French car companies Renault and Citro\"{e}n, respectively. Other studies were carried out by Ferguson (Boeing) and Coons (Ford). 
However, the Renault software was described in several publications by B\'{e}zier, and hence his name has become associated with this theory.\subsection{Bernstein polynomial}To understand B\'{e}zier curves, we start with the Bernstein polynomials of degree $n$ on the interval $[0,1]$:\begin{equation}B_i^{\left( n \right)}\left( x \right) = \left( {\begin{array}{*{20}{c}}n\\
i\end{array}} \right){\left( {1 - x} \right)^{n - i}}{x^i}\end{equation}In computer graphics, the most popular B\'{e}zier curve is cubic ($n=3$). This leads to the polynomials\[B_0^{\left( 3 \right)}\left( x \right) = {\left( {1 - x} \right)^3},B_1^{\left( 3 \right)}\left( x \right) = 3{\left( {1 - x} \right)^2}x,B_2^{\left( 3 \right)}\left( x \right) = 3\left( {1 - x} \right){x^2},B_3^{\left( 3 \right)}\left( x \right) = {x^3}\]\subsection{Construction of B\'{e}zier curves}Four points $c_0,c_1,c_2,c_3$ (in 2 or 3 dimensions) now define the cubic B\'{e}zier curve\begin{equation}\mathbf{B}\left( x \right) = \sum\limits_{i = 0}^3 {{c_i}B_i^{\left( 3 \right)}\left( x \right)} \end{equation}The points $c_i$ are known as \textit{control points}. $\mathbf{B}(x)$ starts at $c_0$ going toward $c_1$ and arrives at $c_3$ coming from $c_2$. $\mathbf{B}(x)$ only interpolates $c_0$ and $c_3$; i.e., it does not generally pass through $c_1$ or $c_2$; these points only provide directional information. In fact, $\mathbf{B}(x)$ is tangent to the line connecting $c_0$ and $c_1$ (and to the line connecting $c_2$ and $c_3$), but it is \lq more tangent\rq\ the further $c_1$ is away from $c_0$ (and the further $c_2$ is away from $c_3$).
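As a quick illustration (our own sketch, not tied to any particular CAD system), the cubic curve $\mathbf{B}(x)$ can be evaluated directly from the four Bernstein polynomials; the control points below are arbitrary:
\begin{verbatim}
c = [0 0; 1 2; 3 2; 4 0];                % control points c0..c3 (arbitrary)
x = linspace(0, 1, 201)';
B = [(1-x).^3, 3*(1-x).^2.*x, 3*(1-x).*x.^2, x.^3];  % Bernstein basis
curve = B*c;                             % points B(x) on the cubic curve
plot(curve(:,1), curve(:,2), '-', c(:,1), c(:,2), 'o--')
\end{verbatim}
The plot shows the curve leaving $c_0$ toward $c_1$ and arriving at $c_3$ from the direction of $c_2$, without passing through $c_1$ or $c_2$.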
\subsection{Summary of observations}When interpolating data, there is a trade-off between smoothness of the interpolant and a somewhat subjective property that we might call \textit{local monotonicity} or \lq shape preservation\rq. At one extreme, we have the piecewise linear interpolant: it has hardly any smoothness. It is continuous, but there are jumps in its first derivative. On the other hand, it preserves the local monotonicity of the data. It never overshoots the data, and it is increasing, decreasing, or constant on the same intervals as the data. At the other extreme, we have the full-degree polynomial interpolant. It is infinitely differentiable. But it often fails to preserve shape, particularly near the ends of the interval.\\The \texttt{pchip} and \texttt{spline} interpolants are in between these two extremes. The \texttt{spline} is smoother than \texttt{pchip}: the \texttt{spline} has 2 continuous derivatives, whereas \texttt{pchip} has only 1. A discontinuous second derivative implies discontinuous curvature, and the human eye can detect large jumps in curvature! This might be a factor in choosing \texttt{spline} over \texttt{pchip}. On the other hand, \texttt{pchip} is guaranteed to preserve shape, whereas \texttt{spline} might not. The best spline is often in the eye of the beholder. \section{Hermite interpolation}\subsection{Hermite interpolation}\textsc{Hermite interpolation task.}\\\textit{For a given function $f \in {C^m}\left[ {a,b} \right]$ and given distinct points $\left\{ {{x_i},i = 0, \ldots ,m} \right\}$, find a polynomial $p \in \mathcal{P}^n$ such that\[{p^{\left( j \right)}}\left( {{x_i}} \right) = {f^{\left( j \right)}}\left( {{x_i}} \right),j = 0, \ldots ,{l_i};i = 0, \ldots ,m;n + 1 = \sum\limits_{i = 0}^m {\left( {{l_i} + 1} \right)} \]}\subsection{Hermite interpolation. Solvability and uniqueness}\textbf{Theorem.} \textit{The Hermite interpolation task has a unique solution, provided that the $x_i$ are distinct.}\\\\
The proof makes use of the fact that the functions $x^j,j=0,\hdots,n$ form a basis of $\mathcal{P}^n$. Note furthermore that $c\prod\limits_{i = 0}^m {{{\left( {x - {x_i}} \right)}^{{l_i} + 1}}} $ satisfies the homogeneous interpolation conditions (all prescribed values equal to zero) but is a polynomial of degree $n+1$ unless $c=0$; this yields the uniqueness.\subsection{Divided differences: Extension to the Hermite case}We allow multiplicity of arguments corresponding to multiple input data at the same interpolation point $x_i$:\[F\left[ {\underbrace {{x_0}, \ldots ,{x_0}}_{\left( {{l_0} + 1} \right)\mbox{ times}},{x_1}, \ldots ,{x_{i - 1}},\underbrace {{x_i}, \ldots ,{x_i}}_{\left( {{l_i} + 1} \right)\mbox{ times}},{x_{i + 1}}, \ldots ,{x_m}} \right]\]In the recurrence relation, division by zero would now occur. Consequently, the definition of the divided differences is slightly altered in this case:\[F\left[ {\underbrace {{x_i}, \ldots ,{x_i}}_{\left( {{l_i} + 1} \right)\mbox{ times}}} \right]: = \frac{1}{{{l_i}!}}{f^{\left( {{l_i}} \right)}}\left( {{x_i}} \right)\]\\\textbf{Theorem. Weierstrass.}\\\textit{Let $f \in C[a,b]$. For any $\varepsilon>0$ there exist an $n \in \mathbb{N}$ and a $p \in \mathcal{P}^n$ such that\[{\left\| {f - p} \right\|_\infty } \le \varepsilon \]}\\\textsc{Note.} $p$ need not be an interpolation polynomial. Making the interpolation grid denser and denser will not necessarily lead to the desired approximating polynomial.\subsection{Monotone operators}\textbf{Definition.} An operator $L: C[a,b] \to C[a,b]$ is called monotone if for all $f,g \in C[a,b]$\[f\left( x \right) \ge g\left( x \right) \Rightarrow \left( {Lf} \right)\left( x \right) \ge \left( {Lg} \right)\left( x \right),\forall x \in \left[ {a,b} \right]\]Evidently, if $L$ is linear, monotonicity is equivalent to\[f\left( x \right) \ge 0 \Rightarrow \left( {Lf} \right)\left( x \right) \ge 0,\forall x \in \left[ {a,b} \right]\]\\\textbf{Theorem. Monotone Operator Theorem.}\\Let ${L_i}: C[a,b] \to C[a,b]$ be a sequence of linear monotone operators. Then,\[\mathop {\lim }\limits_{i \to \infty } {L_i}p = p,\forall p \in {\mathcal{P}^2} \Rightarrow \mathop {\lim }\limits_{i \to \infty } {L_i}f = f,\forall f \in C\left[ {a,b} \right]\]\subsection{Bernstein approximation operator}We consider for a while the interval $[a,b]=[0,1]$.\\\textbf{Definition.} The linear operator $B_n:C[0,1] \to \mathcal{P}^n$ with\[\left( {{B_n}f} \right)\left( x \right) = \sum\limits_{k = 0}^n {\left( {\begin{array}{*{20}{c}}n\\k\end{array}} \right){x^k}{{\left( {1 - x} \right)}^{n - k}}f\left( {\frac{k}{n}} \right)} \]is called the \textit{Bernstein operator} of degree $n$.\\\textsc{Note.} $B_n$ is not a projection.\\\\\textbf{Theorem.}\[\mathop {\lim }\limits_{n \to \infty } {B_n}f = f,\forall f \in C\left[ {0,1} \right]\]\subsection{Derivative of Bernstein approximation}The Bernstein operator even gives uniform convergence of the derivative:\\\\\textbf{Theorem.} \textit{Let $f \in C^1[0,1]$, then
\[\mathop {\lim }\limits_{n \to \infty } {\left\| {f' - \left( {{B_n}f} \right)'} \right\|_\infty } = 0\]}
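The (slow) uniform convergence of $B_nf$ is easy to observe numerically. A minimal Matlab sketch of our own, with the test function $f(x)=\sin \pi x$ chosen arbitrarily:
\begin{verbatim}
f = @(x) sin(pi*x); x = linspace(0, 1, 501);
for n = [5 10 20 40]
    Bnf = zeros(size(x));
    for k = 0:n   % (B_n f)(x) = sum_k C(n,k) x^k (1-x)^(n-k) f(k/n)
        Bnf = Bnf + nchoosek(n,k)*x.^k.*(1-x).^(n-k)*f(k/n);
    end
    fprintf('n = %3d   ||f - B_n f||_inf = %.4f\n', n, max(abs(f(x)-Bnf)))
end
\end{verbatim}
The printed errors decrease only like $\mathcal{O}(1/n)$, consistent with $B_n$ being a slowly converging (but monotone) approximation operator.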

\section{Formulations}The interpolating function $f$ is expressed as a linear combination of basis functions $\phi_j(x)$,\begin{equation}\label{formulation1}f\left( x \right) = \sum\limits_{j = 1}^n {{c_j}{\phi _j}} \left( x \right)\end{equation}where the $c_j$ are coefficients to be determined. The interpolation conditions at the data points then read\begin{equation}f\left( {{x_i}} \right) = \sum\limits_{j = 1}^n {{c_j}{\phi _j}\left( {{x_i}} \right)} = {y_i}\end{equation}which is a system of linear equations that can be solved in several ways. The following Matlab routines, used throughout this section, set up the test problem $f(x)=\log x$ on $\left[ \frac{1}{2},\frac{3}{2} \right]$ and compute the coefficients and pointwise errors of the three interpolants.\\\\\textsc{Algorithm. Interpolation.}\begin{verbatim}
f=inline('log(x)','x');
N=8;                         % number of subintervals (chosen freely)
x0=linspace(1/2,3/2,N+1);    % interpolation points
x=linspace(1/2,3/2,1001);    % fine grid on which the error is measured
% Getting coefficients
PN=Newton(f,x0);
PL=Lagrange(f,x0);
PH=Hermite(f,x0);
% Evaluation (pointwise errors on the fine grid)
EN=errorN(f,PN,x0,x);
EL=errorL(f,PL,x0,x);
EH=errorH(f,PH,x0,x);
\end{verbatim}\textsc{Algorithm. Newton.}\begin{verbatim}
function g=Newton(f,x)
% Divided-difference coefficients, computed in place
g=feval(f,x);
n=length(x);
for j=1:(n-1)
    for i=n:-1:(j+1)
        g(i)=(g(i)-g(i-1))/(x(i)-x(i-j));
    end
end
\end{verbatim}\textsc{Algorithm. Evaluation of Newton interpolation and error.}\begin{verbatim}
function e=errorN(f,g,x0,x)
% Horner evaluation of the Newton form and pointwise error
e=feval(f,x);
n=length(x0);
p=zeros(1,length(x));
for j=n:-1:2
    p=(x-x0(j-1)).*(g(j)+p);
end
p=p+g(1);
e=abs(e-p);
\end{verbatim}\subsection{Newton interpolation}The basis functions in Newton interpolation are of the form
\begin{equation}{\phi _j}\left( x \right) = \prod\limits_{k = 1}^{j - 1} {\left( {x - {x_k}} \right)} \end{equation}(with the empty product ${\phi _1}\left( x \right) \equiv 1$). Using this basis and based on (\ref{formulation1}), the Newton interpolant can be rewritten as follows:\begin{equation}p_N^{n - 1}\left( x \right) = {\gamma _1} + \left( {x - {x_1}} \right)\left( {{\gamma _2} + \left( {x - {x_2}} \right)\left( {{\gamma _3} + \cdots \left( {{\gamma _{n - 1}} + {\gamma _n}\left( {x - {x_{n - 1}}} \right)} \right)} \right)} \right)\end{equation}where the coefficients $\gamma_k$ can be found from the definition of divided differences:\begin{equation}{\gamma _k} = f\left[ {{x_1},{x_2}, \ldots ,{x_k}} \right] = \frac{{f\left[ {{x_2}, \ldots ,{x_k}} \right] - f\left[ {{x_1}, \ldots ,{x_{k - 1}}} \right]}}{{{x_k} - {x_1}}}\end{equation}\[f\left[ {{x_k}} \right] = f\left( {{x_k}} \right)\]\subsection{Lagrange interpolation}The following expression describes the Lagrange interpolant:\begin{equation}p_L^{n - 1}\left( x \right) = \sum\limits_{j = 1}^n {f\left( {{x_j}} \right)} \prod\limits_{k = 1,k \ne j}^n {\frac{{x - {x_k}}}{{{x_j} - {x_k}}}}\end{equation}where the coefficients are simply the values of the function at the interpolating points. The implementation of the method is presented in Algorithm Lagrange; note that the coefficients are trivial to obtain, but the evaluation of the interpolant is more expensive.\\\\\textsc{Algorithm. Lagrange.}\begin{verbatim}
function g=Lagrange(f,x)
g=feval(f,x);
\end{verbatim}\textsc{Algorithm. Evaluation of Lagrange interpolation and error.}\begin{verbatim}
function e=errorL(f,g,x0,x)
e=feval(f,x);
n=length(x0);
p=zeros(1,length(x));
for j=1:n
    q=g(j)*ones(1,length(x));
    for i=1:n
        if i~=j
            q=q.*(x-x0(i))/(x0(j)-x0(i));
        end
    end
    p=p+q;
end
e=abs(e-p);
\end{verbatim}\subsection{Hermite interpolation}This method uses the values of the function at the interpolation points together with its derivatives. The expression can be defined as follows:\begin{equation}
p_H^{2n - 1}\left( x \right) = {\gamma _1} + {\gamma _2}\left( {x - {x_1}} \right) + {\gamma _3}{\left( {x - {x_1}} \right)^2} + {\gamma _4}{\left( {x - {x_1}} \right)^2}\left( {x - {x_2}} \right) + \cdots \end{equation}The computation of the coefficients is similar to the Newton method, but takes the derivative values into account; the evaluation of the interpolant is the same as for the Newton form, with each node doubled.\\\\\textsc{Algorithm. Hermite.}\begin{verbatim}
function g=Hermite(f,x)
fd=inline('<function>','x');  % derivative of f (e.g. '1./x' when f=log(x))
gd=feval(fd,x);
g1=feval(f,x);
m=length(x);
z=zeros(1,2*m); z(1:2:end)=x; z(2:2:end)=x;  % each node doubled
n=2*m;                                       % number of coefficients
g(1)=g1(1);
for j=1:(n-1)
    for i=n:-1:(j+1)
        if j==1
            if mod(i,2)==0
                g(i)=gd(i/2);                % repeated node: f[x,x]=f'(x)
            else
                g(i)=(g1((i+1)/2)-g1((i-1)/2))/(z(i)-z(i-1));
            end
        else
            g(i)=(g(i)-g(i-1))/(z(i)-z(i-j));
        end
    end
end
\end{verbatim}\textsc{Algorithm. Evaluation of Hermite interpolation and error.}\begin{verbatim}
function e=errorH(f,g,x0,x)
n=length(x0);
for i=1:n
    x1(2*i-1)=x0(i);   % double the nodes to match the coefficients g
    x1(2*i)=x0(i);
end
x0=x1;
e=feval(f,x);
n=length(x0);
p=zeros(1,length(x));
for j=n:-1:2
    p=(x-x0(j-1)).*(g(j)+p);
end
p=p+g(1);
e=abs(e-p);
\end{verbatim}
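Independently of the routines above, the same divided-difference table with doubled nodes can be written as a self-contained function. The following is our own sketch (the name \texttt{hermcoef} is arbitrary); it takes nodes \texttt{x}, values \texttt{y} and derivatives \texttt{yp} explicitly:
\begin{verbatim}
function c = hermcoef(x, y, yp)
% Newton-Hermite coefficients via divided differences with doubled nodes.
n = length(x); z = zeros(1, 2*n); Q = zeros(2*n, 2*n);
z(1:2:end) = x; z(2:2:end) = x;       % z = [x1 x1 x2 x2 ...]
Q(1:2:end,1) = y; Q(2:2:end,1) = y;   % zeroth-order differences
for i = 2:2*n                         % first-order differences
    if mod(i,2) == 0
        Q(i,2) = yp(i/2);             % repeated node: f[x,x] = f'(x)
    else
        Q(i,2) = (Q(i,1)-Q(i-1,1))/(z(i)-z(i-1));
    end
end
for j = 3:2*n                         % higher-order differences
    for i = j:2*n
        Q(i,j) = (Q(i,j-1)-Q(i-1,j-1))/(z(i)-z(i-j+1));
    end
end
c = diag(Q);                          % gamma_1, ..., gamma_{2n}
\end{verbatim}
For instance, \texttt{hermcoef([0 1],[1 2],[0 3])} returns $(1,0,1,1)$, i.e., $p(x)=1+x^2+x^2(x-1)=1+x^3$, which indeed satisfies $p(0)=1$, $p'(0)=0$, $p(1)=2$, $p'(1)=3$.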
\section{Interpolation error and convergence}\subsection{Interpolation error}\textbf{Theorem.} \textit{Let $f \in C[a,b]$ and $x_0,x_1,\hdots,x_n$ be $n+1$ distinct points in $[a,b]$. Then there exists a unique polynomial $p_n$ of degree at most $n$ such that $p_n(x_i)=f(x_i), i=0,1,\hdots,n$.}\\\\\textbf{Theorem.} \textit{Let $f \in C^{n+1}[a,b]$ and let $x_0,x_1,\hdots,x_n$ be $n+1$ distinct points in $[a,b]$. If $p_n(x)$ is such that $p_n(x_i)=f(x_i),i=0,1,\hdots,n$, then for each $x \in [a,b]$ there exists $\xi(x) \in [a,b]$ such that
\begin{equation}f\left( x \right) - {p_n}\left( x \right) = \frac{{{f^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right)}}{{\left( {n + 1} \right)!}}W\left( x \right)\end{equation}where\[W\left( x \right) = \prod\limits_{i = 0}^n {\left( {x - {x_i}} \right)} \]}\\\textsc{Proof.} Let $x \in [a,b]$, $x \ne x_i, i=0,1,\hdots,n$, and define the function\[g\left( t \right) = f\left( t \right) - {p_n}\left( t \right) - \frac{{f\left( x \right) - {p_n}\left( x \right)}}{{W\left( x \right)}}W\left( t \right)\]We note that $g$ has $n+2$ roots, i.e., $g(x_i)=0,i=0,1,\hdots,n$, and $g(x)=0$. By the generalized Rolle's Theorem, there exists $\xi(x) \in (a,b)$ such that\[{g^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right) = 0\]which leads to\begin{equation}\label{1.1.1}{g^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right) = {f^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right) - 0 - \frac{{f\left( x \right) - {p_n}\left( x \right)}}{{W\left( x \right)}}\left( {n + 1} \right)! = 0\end{equation}Solving (\ref{1.1.1}) we find\[f\left( x \right) - {p_n}\left( x \right) = \frac{{{f^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right)}}{{\left( {n + 1} \right)!}}W\left( x \right)\]which completes the proof. $\square$\\\\\textbf{Corollary.} \textit{Assume that $\mathop {\max }\limits_{x \in \left[ {a,b} \right]} \left| {{f^{\left( {n + 1} \right)}}\left( x \right)} \right| = {M_{n + 1}}$, then\begin{equation}\left| {f\left( x \right) - {p_n}\left( x \right)} \right| \le \frac{{{M_{n + 1}}}}{{\left( {n + 1} \right)!}}\left| {W\left( x \right)} \right|,\forall x \in \left[ {a,b} \right]\end{equation}and\begin{equation}\mathop {\max }\limits_{x \in \left[ {a,b} \right]} \left| {f\left( x \right) - {p_n}\left( x \right)} \right| \le \frac{{{M_{n + 1}}}}{{\left( {n + 1} \right)!}}\mathop {\max }\limits_{x \in \left[ {a,b} \right]} \left| {W\left( x \right)} \right|\end{equation}}\subsection{Convergence}We will study the uniform convergence of interpolation polynomials on a fixed interval $[a,b]$ as the number of interpolation points approaches infinity. Let $h = \frac{{b - a}}{n}$ and ${x_i} = a + ih,i = 0,1, \ldots ,n$, be equidistant interpolation points. Let $p_n(x)$ denote the Lagrange interpolation polynomial, and let us study the limit\[\mathop {\lim }\limits_{n \to \infty } {\left\| {f - {p_n}} \right\|_\infty } = \mathop {\lim }\limits_{n \to \infty } \mathop {\max }\limits_{x \in \left[ {a,b} \right]} \left| {f\left( x \right) - {p_n}\left( x \right)} \right|\]
For $x \in \left[ {a,b} \right]$, $\left| {x - {x_i}} \right| \le b - a$, which leads to $\left| {W\left( x \right)} \right| \le {\left( {b - a} \right)^{n + 1}}$. Thus, the interpolation error is bounded as\begin{equation}{\left\| {f - {p_n}} \right\|_\infty } \le \frac{{{M_{n + 1}}}}{{\left( {n + 1} \right)!}}{\left( {b - a} \right)^{n + 1}}\end{equation}We have uniform convergence when $\frac{{{M_{n + 1}}}}{{\left( {n + 1} \right)!}}{\left( {b - a} \right)^{n + 1}} \to 0$ as $n \to \infty$.\\\\\textbf{Theorem.} \textit{Let $f$ be an analytic function on a disk centered at $\frac{{a + b}}{2}$ with radius $r > \frac{{3\left( {b - a} \right)}}{2}$. Then, the interpolation polynomial $p_n(x)$ satisfying $p_n(x_i)=f(x_i), i=0,1,\hdots,n$, converges uniformly to $f$ as $n \to \infty$, i.e.,}\begin{equation}\mathop {\lim }\limits_{n \to \infty } {\left\| {f - {p_n}} \right\|_\infty } = 0\end{equation}\\\textsc{Proof.} A function is analytic at $\frac{{a + b}}{2}$ if it admits a power series expansion that converges on a disk of radius $r$ centered at $\frac{{a + b}}{2}$. Applying Cauchy's integral formula for derivatives,\[{f^{\left( k \right)}}\left( x \right) = \frac{{k!}}{{2\pi i}}\oint_{{C_r}} {\frac{{f\left( z \right)}}{{{{\left( {z - x} \right)}^{k + 1}}}}dz} ,x \in \left[ {a,b} \right]\]\[\left| {{f^{\left( k \right)}}\left( x \right)} \right| \le \frac{{k!}}{{2\pi }}\oint_{{C_r}} {\frac{{\left| {f\left( z \right)} \right|}}{{{{\left| {z - x} \right|}^{k + 1}}}}\left| {dz} \right|} ,x \in \left[ {a,b} \right]\]Let $z$ be a point on the circle $C_r$, let $x \in [a,b]$, and let $d$ denote the distance from $x$ to the center $\frac{{a + b}}{2}$. From the triangle with vertices $z$, $\frac{{a + b}}{2}$ and $x$, the triangle inequality gives\[\left| {z - x} \right| + d \ge r\]Noting that $d \le \frac{{b - a}}{2}$, this yields\[\left| {z - x} \right| \ge r - \frac{{b - a}}{2}\]\[\left| {{f^{\left( k \right)}}\left( x \right)} \right| \le \frac{{k!}}{{2\pi }}\frac{{\mathop {\max }\limits_{z \in {C_r}} \left| {f\left( z \right)} \right|}}{{{{\left( {r - \frac{{b - a}}{2}} \right)}^{k + 1}}}}2\pi r\]Assume $r > \frac{{b - a}}{2}$ (so that $[a,b]$ lies inside $C_r$) to obtain\[{M_k} \le \frac{r}{{r - \frac{{b - a}}{2}}}\mathop {\max }\limits_{z \in {C_r}} \left| {f\left( z \right)} \right|\frac{{k!}}{{{{\left( {r - \frac{{b - a}}{2}} \right)}^k}}}\]Using $k=n+1$, the interpolation error may be bounded as\[\left| {f\left( x \right) - {p_n}\left( x \right)} \right| \le \frac{{{M_{n + 1}}}}{{\left( {n + 1} \right)!}}{\left( {b - a} \right)^{n + 1}} \le \frac{r}{{r - \frac{{b - a}}{2}}}\mathop {\max }\limits_{z \in {C_r}} \left| {f\left( z \right)} \right|{\left( {\frac{{b - a}}{{r - \frac{{b - a}}{2}}}} \right)^{n + 1}}\]Finally, we have uniform convergence if $\frac{{b - a}}{{r - \frac{{b - a}}{2}}} < 1$, i.e., $r > \frac{3}{2}\left( {b - a} \right)$, which establishes the theorem. $\square$\subsection{Interpolation errors and divided differences}The Newton form of $p_n(x)$ that interpolates $f$ at $x_i,i=0,1,\hdots,n$ is\[{p_n}\left( x \right) = f\left[ {{x_0}} \right] + f\left[ {{x_0},{x_1}} \right]\left( {x - {x_0}} \right) + \sum\limits_{i = 2}^n {f\left[ {{x_0},
\ldots ,{x_i}} \right]\prod\limits_{j = 0}^{i - 1} {\left( {x - {x_j}} \right)} } \]We now prove a theorem relating the interpolation errors and divided differences.\\\\\textbf{Theorem.} \textit{If $f \in C[a,b]$ is $n+1$ times differentiable, then for every $x \in [a,b]$\begin{equation}f\left( x \right) - {p_n}\left( x \right) = f\left[ {{x_0},{x_1}, \ldots ,{x_n},x} \right]\prod\limits_{i = 0}^n {\left( {x - x{}_i} \right)}\end{equation}and\begin{equation}f\left[ {{x_0},{x_1}, \ldots ,{x_k}} \right] = \frac{{{f^{\left( k \right)}}\left( \xi \right)}}{{k!}},\xi \in \left[ {\mathop {\min }\limits_{i = 0, \ldots ,k} {x_i},\mathop {\max }\limits_{i = 0, \ldots ,k} {x_i}} \right]\end{equation}}\\\textsc{Proof sketch.} Let us introduce another point $x$ distinct from $x_i,i=0,1,\hdots,n$, and let $p_{n+1}$ interpolate $f$ at $x_0,x_1,\hdots,x_n$ and $x$; thus\[{p_{n + 1}}\left( x \right) = {p_n}\left( x \right) + f\left[ {{x_0},{x_1}, \ldots ,{x_n},x} \right]\prod\limits_{i = 0}^n {\left( {x - {x_i}} \right)} \]Combining the equation $p_{n+1}(x)=f(x)$ and the interpolation error formula, we write\[f\left( x \right) - {p_n}\left( x \right) = f\left[ {{x_0},{x_1}, \ldots ,{x_n},x} \right]\prod\limits_{i = 0}^n {\left( {x - {x_i}} \right)} = \frac{{{f^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right)}}{{\left( {n + 1} \right)!}}\prod\limits_{i = 0}^n {\left( {x - {x_i}} \right)} \]This leads to\[f\left[ {{x_0},{x_1}, \ldots ,{x_k}} \right] = \frac{{{f^{\left( k \right)}}\left( \xi \right)}}{{k!}},\xi \in \left[ {\mathop {\min }\limits_{i = 0, \ldots ,k} {x_i},\mathop {\max }\limits_{i = 0, \ldots ,k} {x_i}} \right]\]which completes the proof. $\square$\\\\\textsc{Remark.}\begin{equation}\mathop {\lim }\limits_{{x_i} \to {x_0},i = 1, \ldots ,k} f\left[ {{x_0},{x_1}, \ldots ,{x_k}} \right] = \frac{{{f^{\left( k \right)}}\left( {{x_0}} \right)}}{{k!}}\end{equation}\section{Interpolation at Chebyshev points}For some functions, uniform convergence does not occur when the interpolation points are equally spaced. We now study the interpolation error on $[-1,1]$, where the $n+1$ interpolation points $x_i^*, i=0,1,\hdots,n$, in $[-1,1]$ are selected such that\[{\left\| {{W^*}} \right\|_\infty } = \mathop {\min }\limits_{Q \in {{\mathcal{\widetilde P}}_{n + 1}}} {\left\| Q \right\|_\infty }\]where $\mathcal{\widetilde{P}}_n$ is the set of monic polynomials\[{\mathcal{\widetilde P}_n} = \left\{ {Q \in {\mathcal{P}_n}:Q = {x^n} + \sum\limits_{i = 0}^{n - 1} {{c_i}{x^i}} } \right\}\]and\[{W^*}\left( x \right) = \prod\limits_{i = 0}^n {\left( {x - x_i^*} \right)} \]
\textsc{Question.} Are there interpolation points $x_i^*,i=0,1,2,\hdots,n$ in $[-1,1]$ such that\[{\left\| {{W^*}} \right\|_\infty } = \mathop {\min }\limits_{{x_i} \in \left[ { - 1,1} \right],i = 0,1, \ldots ,n} {\left\| W \right\|_\infty }?\]If so, the interpolation error can be bounded by\[{\left\| {{E_n}} \right\|_\infty } \le \frac{{{M_{n + 1}}}}{{\left( {n + 1} \right)!}}{\left\| {{W^*}} \right\|_\infty }\]\textsc{The answer.} The best interpolation points $x_i^*,i=0,1,2,\hdots,n$ are the roots of the Chebyshev polynomial $T_{n+1}(x)$, defined as\[{T_k}\left( x \right) = \cos \left( {k\arccos x} \right),k = 0,1,2, \ldots \]Some properties of Chebyshev polynomials:\\\\\textbf{Theorem.} \textit{The Chebyshev polynomials $T_k(x),k=0,1,2,\hdots$ satisfy the following properties:\\(i) $\left| {{T_k}\left( x \right)} \right| \le 1,\forall - 1 \le x \le 1$.\\(ii) ${T_{k + 1}}\left( x \right) = 2x{T_k}\left( x \right) - {T_{k - 1}}\left( x \right)$.\\(iii) $T_k(x)$ has $k$ roots $x_j^* = \cos \frac{{\left( {2j + 1} \right)\pi }}{{2k}} \in \left[ { - 1,1} \right],j = 0,1, \ldots ,k - 1$.\\(iv) ${T_k}\left( x \right) = {2^{k - 1}}\prod\limits_{j = 0}^{k - 1} {\left( {x - x_j^*} \right)} $.\\(v) If ${\widetilde T_k}\left( x \right) = \frac{{{T_k}\left( x \right)}}{{{2^{k - 1}}}}$, then $\mathop {\max }\limits_{x \in \left[ { - 1,1} \right]} \left| {{{\widetilde T}_k}\left( x \right)} \right| = \frac{1}{{{2^{k - 1}}}}$.}\\\\\textsc{Proof.}\\\textbf{(ii).} Write\[{T_{k + 1}}\left( x \right) = \cos \left( {k\arccos x + \arccos x} \right) = \cos \left( {k\theta + \theta } \right)\]where $\theta = \arccos x$, and similarly\[{T_{k - 1}}\left( x \right) = \cos \left( {k\theta - \theta } \right)\]We have\[\begin{array}{l}\cos \left( {k\theta + \theta } \right) = \cos \left( {k\theta } \right)\cos \theta - \sin \left( {k\theta } \right)\sin \theta \\\cos \left( {k\theta - \theta } \right) = \cos \left( {k\theta } \right)\cos \theta + \sin \left( {k\theta } \right)\sin \theta \end{array}\]Adding these two equations, we obtain\[{T_{k + 1}}\left( x \right) + {T_{k - 1}}\left( x \right) = 2\cos \left( {k\theta } \right)\cos \theta = 2x{T_k}\left( x \right)\]\textbf{(iii).} To obtain the roots we set $\cos \left( {k\arccos x} \right) = 0$, which leads to\[k\arccos x = \frac{{\left( {2j + 1} \right)\pi }}{2},j = 0, \pm 1, \pm 2, \ldots \]Solving for $x$, we obtain\[x = \cos \frac{{\left( {2j + 1} \right)\pi }}{{2k}},j = 0, \pm 1, \pm 2, \ldots \]which yields the $k$ distinct roots\[x_j^* = \cos \frac{{\left( {2j + 1} \right)\pi }}{{2k}},j = 0,1, \ldots ,k - 1\]\textbf{(iv).} We use induction. Since ${T_1}\left( x \right) = {2^0}x$ and ${T_2}\left( x \right) = {2^1}{x^2} - 1$, \textbf{(iv)} is true for $k=1,2$. Assume
\[{T_k}\left( x \right) = {2^{k - 1}}{x^k} + \sum\limits_{i = 0}^{k - 1} {{c_i}{x^i}} ,\forall k = 1,2, \ldots ,n\]Using \textbf{(ii)}, we write\[{T_{n + 1}}\left( x \right) = 2x{T_n}\left( x \right) - {T_{n - 1}}\left( x \right) = {2^n}{x^{n + 1}} + \sum\limits_{i = 0}^n {{a_i}{x^i}} \]This establishes \textbf{(iv)}, i.e.,\[{T_k}\left( x \right) = {2^{k - 1}}\prod\limits_{j = 0}^{k - 1} {\left( {x - x_j^*} \right)} \]\textbf{(v).} Apply \textbf{(i)} and \textbf{(iv)}. $\square$\\\\\textbf{Corollary.} \textit{If ${\tilde T_n}\left( x \right)$ is the monic Chebyshev polynomial of degree $n$, then\[\mathop {\max }\limits_{ - 1 \le x \le 1} \left| {{{\tilde Q}_n}\left( x \right)} \right| \ge \mathop {\max }\limits_{ - 1 \le x \le 1} \left| {{{\tilde T}_n}\left( x \right)} \right| = \frac{1}{{{2^{n - 1}}}},\forall {\tilde Q_n} \in {\mathcal{\widetilde P}_n}\]}\\\textsc{Proof.} Assume there is a monic polynomial ${\tilde R_n}\left( x \right)$ such that\[\mathop {\max }\limits_{ - 1 \le x \le 1} \left| {{{\tilde R}_n}\left( x \right)} \right| < \frac{1}{{{2^{n - 1}}}}\]Let ${z_k} = \cos \frac{{k\pi }}{n},k = 0,1, \ldots ,n$, be the points where ${\tilde T_n}$ attains its extreme values $ \pm \frac{1}{{{2^{n - 1}}}}$ with alternating signs. The polynomial ${d_{n - 1}}\left( x \right) = {\tilde T_n}\left( x \right) - {\tilde R_n}\left( x \right)$ of degree at most $n-1$ then satisfies\[{d_{n - 1}}\left( {{z_0}} \right) > 0,{d_{n - 1}}\left( {{z_1}} \right) < 0,{d_{n - 1}}\left( {{z_2}} \right) > 0,{d_{n - 1}}\left( {{z_3}} \right) < 0, \ldots \]So $d_{n-1}(x)$ changes sign between each pair $z_k$ and $z_{k+1}, k=0,1,\hdots,n-1$, and thus has $n$ roots. Since its degree is at most $n-1$, this forces ${d_{n - 1}} \equiv 0$, i.e., ${\tilde T_n} \equiv {\tilde R_n}$, which contradicts the assumption above. $\square$\\\\\textsc{Application to interpolation.}\\Let $p_n(x) \in \mathcal{P}_n$ interpolate $f(x) \in C^{n+1}[-1,1]$ at the roots $x_j^*,j=0,1,2,\hdots,n$ of $T_{n+1}(x)$. Then we can write the interpolation error formula as\[f\left( x \right) - {p_n}\left( x \right) = \frac{{{f^{\left( {n + 1} \right)}}\left( {\xi \left( x \right)} \right)}}{{\left( {n + 1} \right)!}}{\tilde T_{n + 1}}\left( x \right)\]Using \textbf{(v)} from the previous theorem and assuming\[{\left\| {{f^{\left( {n + 1} \right)}}} \right\|_{\infty ,\left[ { - 1,1} \right]}} \le {M_{n + 1}}\]we obtain\[\mathop {\max }\limits_{x \in \left[ { - 1,1} \right]} \left| {f\left( x \right) - {p_n}\left( x \right)} \right| \le \frac{{{M_{n + 1}}}}{{{2^n}\left( {n + 1} \right)!}}\]\textsc{Remark.}\begin{enumerate}\item We note that this choice of interpolation points reduces the error significantly.\item With Chebyshev points, $p_n$ converges uniformly to $f$ already when $f \in C^1[-1,1]$; the function $f$ does not have to be analytic.\end{enumerate}\textsc{Chebyshev points on $[a,b]$.}\\Chebyshev points can be used on an arbitrary interval $[a,b]$ using the linear transformation\begin{equation}\label{1}x = \frac{{a + b}}{2} + \frac{{b - a}}{2}t, - 1 \le t \le 1\end{equation}
We also need the inverse mapping\begin{equation}\label{2}t = 2\frac{{x - a}}{{b - a}} - 1,a \le x \le b\end{equation}First, we order the Chebyshev nodes in $[-1,1]$ as\[t_k^* = \cos \left( {\frac{{2k + 1}}{{2n + 2}}\pi - \pi } \right) = - \cos \left( {\frac{{2k + 1}}{{2n + 2}}\pi } \right),k = 0,1,2, \ldots ,n\]Then we define the interpolation nodes on an arbitrary interval $[a,b]$ as\[x_k^* = \frac{{a + b}}{2} + \frac{{b - a}}{2}t_k^*,k = 0,1,2, \ldots ,n\]\textsc{Remark.}\begin{enumerate}\item $x_0^* < x_1^* < \cdots < x_n^*$.\item The $x_k^*$ are symmetric with respect to the center $\frac{{a + b}}{2}$.\item The $x_k^*$ are independent of the interpolated function $f$.\end{enumerate}\textbf{Theorem.} \textit{Let $f \in C^{n+1}[a,b]$ and let $p_n$ interpolate $f$ at the Chebyshev nodes $x_k^*,k=0,1,2,\hdots,n$ in $[a,b]$. Then\[\mathop {\max }\limits_{x \in \left[ {a,b} \right]} \left| {f\left( x \right) - {p_n}\left( x \right)} \right| \le 2{M_{n + 1}}\frac{{{{\left( {b - a} \right)}^{n + 1}}}}{{{4^{n + 1}}\left( {n + 1} \right)!}}\]where\[{M_{n + 1}} = \mathop {\max }\limits_{x \in \left[ {a,b} \right]} \left| {{f^{\left( {n + 1} \right)}}\left( x \right)} \right|\]}\\\textsc{Proof.} It suffices to rewrite $W\left( x \right) = \prod\limits_{i = 0}^n {\left( {x - x_i^*} \right)} $ using the mappings (\ref{1}) and (\ref{2}) to find that\[x - x_i^* = \frac{{b - a}}{2}\left( {t - t_i^*} \right)\]and\[W\left( x \right) = \prod\limits_{i = 0}^n {\left( {x - x_i^*} \right)} = {\left( {\frac{{b - a}}{2}} \right)^{n + 1}}\prod\limits_{i = 0}^n {\left( {t - t_i^*} \right)} = {\left( {\frac{{b - a}}{2}} \right)^{n + 1}}{\tilde T_{n + 1}}\left( t \right)\]Finally, using ${\left\| {{{\tilde T}_{n + 1}}} \right\|_{\infty ,\left[ { - 1,1} \right]}} = \frac{1}{{{2^n}}}$ we complete the proof. $\square$
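Numerically, the benefit of the Chebyshev nodes is most striking on Runge's example $f(x)=\frac{1}{1+25x^2}$ on $[-1,1]$. The sketch below is our own illustration; it uses Matlab's \texttt{polyfit}/\texttt{polyval} purely for convenience (for larger $n$ one would evaluate the interpolant in a better-conditioned form, e.g.\ the barycentric Lagrange formula):
\begin{verbatim}
f = @(x) 1./(1 + 25*x.^2); xx = linspace(-1, 1, 1001);
for n = [8 12 16]
    xe = linspace(-1, 1, n+1);               % equispaced nodes
    k = 0:n; xc = -cos((2*k+1)*pi/(2*n+2));  % Chebyshev nodes t_k^*
    pe = polyfit(xe, f(xe), n);              % exact fit = interpolation
    pc = polyfit(xc, f(xc), n);
    fprintf('n=%2d  equispaced: %9.2e   Chebyshev: %9.2e\n', n, ...
        max(abs(f(xx)-polyval(pe,xx))), max(abs(f(xx)-polyval(pc,xx))))
end
\end{verbatim}
The equispaced error grows with $n$ (the Runge phenomenon), while the Chebyshev error decreases.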
\section{Hermite interpolation}\textsc{The general Hermite interpolation.}\\Let $x_0<x_1<\hdots<x_m$ be $m+1$ distinct points and require\begin{equation}\label{3}{f^{\left( k \right)}}\left( {{x_i}} \right) = p_n^{\left( k \right)}\left( {{x_i}} \right),k = 0,1, \ldots ,{n_i} - 1,i = 0,1, \ldots ,m\end{equation}where $\sum\limits_{i = 0}^m {{n_i}} = n + 1$ and ${n_i} \ge 1$. Note that $n_i=1,i=0,\hdots,m$, leads to Lagrange interpolation.\subsection{Lagrange form of Hermite interpolation polynomials}\textbf{Theorem.} \textit{There exists a unique polynomial $p_n(x)$ that satisfies (\ref{3}) with $n_i=2$ and $n=2m+1$.}\subsection{Newton form of Hermite interpolation polynomial}Using the relation\begin{equation}f\left[ {{x_0},{x_0} + h,{x_0} + 2h, \ldots ,{x_0} + kh} \right] = \frac{{{f^{\left( k \right)}}\left( \xi \right)}}{{k!}},{x_0} < \xi < {x_0} + kh\end{equation}and taking the limit as $h \to 0$, we obtain\begin{equation}
f\left[ {\underbrace {{x_0}, \ldots ,{x_0}}_{\left( {k + 1} \right)\mbox{ times}}} \right] = \frac{{{f^{\left( k \right)}}\left( {{x_0}} \right)}}{{k!}}\end{equation}The general formula for divided differences with repeated arguments, for $x_0 \le x_1 \le \hdots \le x_n$, is given by\begin{equation}f\left[ {{x_i},{x_{i + 1}}, \ldots ,{x_{i + k}}} \right] = \left\{ {\begin{array}{*{20}{l}}{\frac{{f\left[ {{x_{i + 1}}, \ldots ,{x_{i + k}}} \right] - f\left[ {{x_i}, \ldots ,{x_{i + k - 1}}} \right]}}{{{x_{i + k}} - {x_i}}},\mbox{ if }{x_i} \ne {x_{i + k}}}\\{\frac{{{f^{\left( k \right)}}\left( {{x_i}} \right)}}{{k!}},\mbox{ if }{x_i} = {x_{i + k}}}\end{array}} \right.\end{equation}\subsection{Hermite interpolation error}\textbf{Theorem.} \textit{Let $f(x) \in C[a,b]$ be $2m+2$ times differentiable on $(a,b)$ and consider $x_0<x_1<\hdots<x_m$ in $[a,b]$ with $n_i=2,i=0,1,\hdots,m$. If $p_{2m+1}$ is the Hermite polynomial such that $p_{2m + 1}^{\left( k \right)}\left( {{x_i}} \right) = {f^{\left( k \right)}}\left( {{x_i}} \right),i = 0,1, \ldots ,m;k = 0,1$, then there exists $\xi(x) \in [a,b]$ such that\begin{equation}f\left( x \right) - {p_{2m + 1}}\left( x \right) = \frac{{{f^{\left( {2m + 2} \right)}}\left( {\xi \left( x \right)} \right)}}{{\left( {2m + 2} \right)!}}W\left( x \right)\end{equation}where\begin{equation}W\left( x \right) = \prod\limits_{i = 0}^m {{{\left( {x - {x_i}} \right)}^2}} \end{equation}}\\\textsc{Proof.} We consider the special case $n_i=2$, i.e., $n=2m+1$, select an arbitrary point $x \in [a,b], x \ne x_i, i=0,\hdots,m$, and define the function\begin{equation}\label{4}g\left( t \right) = f\left( t \right) - {p_{2m + 1}}\left( t \right) - \frac{{f\left( x \right) - {p_{2m + 1}}\left( x \right)}}{{W\left( x \right)}}W\left( t \right)\end{equation}Note that $g$ has $m+2$ roots, i.e., $g(x_i)=0,i=0,1,\hdots,m$, and $g(x)=0$. Applying the generalized Rolle's Theorem, we obtain\[g'\left( {{\xi _i}} \right) = 0,i = 0,1, \ldots ,m,\mbox{ where }{\xi _i} \in \left[ {a,b} \right],{\xi _i} \ne {x_j},{\xi _i} \ne x\]In addition, $g'(x_i)=0,i=0,1,\hdots,m$. Thus, $g'(t)$ has $2m+2$ roots in $[a,b]$. Applying the generalized Rolle's Theorem again, there exists $\xi \in (a,b)$ such that\[{g^{\left( {2m + 2} \right)}}\left( \xi \right) = 0\]Combining this with (\ref{4}) yields\[0 = {f^{\left( {2m + 2} \right)}}\left( \xi \right) - \frac{{f\left( x \right) - {p_{2m + 1}}\left( x \right)}}{{W\left( x \right)}}\left( {2m + 2} \right)!\]Solving for $f(x)-p_{2m+1}(x)$ leads to the result. $\square$\\\\\textbf{Corollary.} \textit{If $f(x)$ and $p_{2m+1}(x)$ are as in the previous theorem, then\begin{equation}
\left| {f\left( x \right) - {p_{2m + 1}}\left( x \right)} \right| \le \frac{{{M_{2m + 2}}}}{{\left( {2m + 2} \right)!}}\left| {W\left( x \right)} \right|,x \in \left[ {a,b} \right]\end{equation}and\begin{equation}{\left\| {f\left( x \right) - {p_{2m + 1}}\left( x \right)} \right\|_{\infty ,\left[ {a,b} \right]}} \le \frac{{{M_{2m + 2}}}}{{\left( {2m + 2} \right)!}}{\left( {b - a} \right)^{2m + 2}}\end{equation}}\section{Spline Interpolation}We now study piecewise polynomial interpolation and express the interpolation errors in terms of the subdivision size and the degree of the polynomials.\subsection{Piecewise Lagrange interpolation}We construct the piecewise linear interpolant for the data $(x_i,f(x_i)),i=0,1,\hdots,n$, with $x_0<x_1<\hdots<x_n$, as\begin{equation}{P_1}\left( x \right) = {p_{1,i}}\left( x \right) = f\left( {{x_i}} \right)\frac{{x - {x_{i + 1}}}}{{{x_i} - {x_{i + 1}}}} + f\left( {{x_{i + 1}}} \right)\frac{{x - {x_i}}}{{{x_{i + 1}} - {x_i}}},x \in \left[ {{x_i},{x_{i + 1}}} \right],i = 0,1, \ldots ,n - 1\end{equation}The interpolation error on $(x_i,x_{i+1})$ is bounded as\begin{equation}\left| {{E_{1,i}}\left( x \right)} \right| \le \frac{{{M_{2,i}}}}{2}\left| {\left( {x - {x_i}} \right)\left( {x - {x_{i + 1}}} \right)} \right|,x \in \left( {{x_i},{x_{i + 1}}} \right)\end{equation}where ${M_{2,i}} = \mathop {\max }\limits_{{x_i} \le x \le {x_{i + 1}}} \left| {{f^{\left( 2 \right)}}\left( x \right)} \right|$. This can be written as\begin{equation}{\left\| {{E_{1,i}}} \right\|_\infty } \le \frac{{{M_{2,i}}h_i^2}}{8},{h_i} = {x_{i + 1}} - {x_i}\end{equation}The global error is\begin{equation}{\left\| {{E_1}} \right\|_\infty } \le \frac{{{M_2}{H^2}}}{8},H = \mathop {\max }\limits_{i = 0, \ldots ,n - 1} {h_i}\end{equation}\textbf{Theorem.} \textit{Let $f \in C^0[a,b]$ be twice differentiable on $(a,b)$. If $P_1(x)$ is the piecewise linear interpolant of $f$ at $x_i=a+ih,i=0,1,\hdots,n$, and $h = \frac{{b - a}}{n}$, then $P_1$ converges uniformly to $f$ as $n \to \infty$.}\\\\\textsc{Proof.} This follows from the global bound above, since $H = h = \frac{{b - a}}{n} \to 0$ as $n \to \infty$. $\square$\\\\Similarly, for piecewise quadratic interpolation we select $h = \frac{{b - a}}{n}$ and construct $p_{2,i}$ interpolating $f$ at ${x_i},\frac{{{x_i} + {x_{i + 1}}}}{2},{x_{i + 1}}$. In this case the interpolation error is bounded as\begin{equation}
{\left\| {{E_{2,i}}} \right\|_\infty } \le \frac{{{M_{3,i}}{{\left( {\frac{{{h_i}}}{2}} \right)}^3}}}{{9\sqrt 3 }},{h_i} = {x_{i + 1}} - {x_i}\end{equation}A global bound is\begin{equation}{\left\| {{E_2}} \right\|_\infty } \le \frac{{{M_3}{{\left( {\frac{H}{2}} \right)}^3}}}{{9\sqrt 3 }},H = \mathop {\max }\limits_{i = 0, \ldots ,n - 1} {h_i}\end{equation}so $P_2$ also converges uniformly to $f$ as $n \to \infty$.\\\\\textbf{Theorem.} \textit{Let $f \in C^0[a,b]$ be $m+1$ times differentiable on $(a,b)$ and let $x_0<x_1<\hdots<x_n$ with $h_i=x_{i+1}-x_i$ and $H = \mathop {\max }\limits_i {h_i}$. If $P_m(x)$ is the piecewise polynomial of degree $m$ on each subinterval $[x_i,x_{i+1}]$ and $P_m(x)$ interpolates $f$ on $[x_i,x_{i+1}]$ at ${x_{i,k}} = {x_i} + k{\tilde h_i},k = 0,1, \ldots ,m;{\tilde h_i} = \frac{{{h_i}}}{m}$, then $P_m$ converges uniformly to $f$ as $H \to 0$.}\\\\\textsc{Proof.} Use the error bound\begin{equation}{\left\| {{E_m}} \right\|_\infty } \le \frac{{{M_{m + 1}}{H^{m + 1}}}}{{\left( {m + 1} \right)!}}\end{equation}and let $H \to 0$. $\square$\subsection{Cubic spline interpolation}We use piecewise polynomials of degree three that are $C^2$ and interpolate the data, i.e., $S(x_k)=f(x_k)=y_k,k=0,1,\hdots,n$.\\\\\textsc{Algorithm.}\begin{description}\item[Step 1.] Order the points $x_k,k=0,1,\hdots,n$ such that\[a = {x_0} < {x_1} < \cdots < {x_{n - 1}} < {x_n} = b\]\item[Step 2.] Let $S(x)$ be a piecewise cubic function defined by $n$ cubic polynomials such that\[S\left( x \right) = {S_k}\left( x \right) = {a_k} + {b_k}\left( {x - {x_k}} \right) + {c_k}{\left( {x - {x_k}} \right)^2} + {d_k}{\left( {x - {x_k}} \right)^3},{x_k} \le x \le {x_{k + 1}}\]\item[Step 3.] Find $a_k,b_k,c_k,d_k,k=0,1,\hdots,n-1$ such that\begin{enumerate}\item $S\left( {{x_k}} \right) = {y_k},k = 0,1, \ldots ,n$\item ${S_k}\left( {{x_{k + 1}}} \right) = {S_{k + 1}}\left( {{x_{k + 1}}} \right),k = 0,1, \ldots ,n - 2$\item ${S_k}'\left( {{x_{k + 1}}} \right) = {S_{k + 1}}'\left( {{x_{k + 1}}} \right),k = 0,1, \ldots ,n - 2$\item ${S_k}''\left( {{x_{k + 1}}} \right) = {S_{k + 1}}''\left( {{x_{k + 1}}} \right),k = 0,1, \ldots ,n - 2$\end{enumerate}\end{description}
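The conditions of Step 3 give $4n-2$ equations for the $4n$ coefficients; two end conditions (e.g.\ not-a-knot, or the natural conditions $S''(x_0)=S''(x_n)=0$ of the final theorem below) close the system. In Matlab the whole construction is packaged in the built-in \texttt{spline} (not-a-knot) and \texttt{pchip} commands, so a minimal comparison sketch (with arbitrary sample data of our own choosing) is:
\begin{verbatim}
x0 = 1:6; y0 = [16 18 21 17 15 12];      % arbitrary sample data
x  = linspace(1, 6, 301);
plot(x, spline(x0,y0,x), '-', x, pchip(x0,y0,x), '--', x0, y0, 'o')
legend('spline (not-a-knot)', 'pchip', 'data')
\end{verbatim}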
\textbf{Theorem.} \textit{If $A$ is an $n \times n$ strictly diagonally dominant matrix, i.e., $\left| {{a_{kk}}} \right| > \sum\limits_{i = 1,i \ne k}^n {\left| {{a_{ki}}} \right|} ,k = 1,2, \ldots ,n$, then $A$ is nonsingular.}\\\\\textsc{Proof.} By contradiction, we assume that $A$ is singular, i.e., there exists a nonzero vector $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{0}$, and let $x_k$ be
such that $\left| {{x_k}} \right| = \max \left| {{x_i}} \right|$. This leads to\[{a_{kk}}{x_k} = - \sum\limits_{i = 1,i \ne k}^n {{a_{ki}}{x_i}} \]Taking the absolute value and using the triangle inequality, we obtain\[\left| {{a_{kk}}} \right|\left| {{x_k}} \right| \le \sum\limits_{i = 1,i \ne k}^n {\left| {{a_{ki}}} \right|\left| {{x_i}} \right|} \]Dividing both sides by $\left| {{x_k}} \right|$, we get\[\left| {{a_{kk}}} \right| \le \sum\limits_{i = 1,i \ne k}^n {\left| {{a_{ki}}} \right|\frac{{\left| {{x_i}} \right|}}{{\left| {{x_k}} \right|}}} \le \sum\limits_{i = 1,i \ne k}^n {\left| {{a_{ki}}} \right|} \]This contradicts the strict diagonal dominance of $A$. $\square$\\\\\textbf{Theorem.} \textit{Let us consider the set of data points $(x_i,f(x_i)),i=0,1,\hdots,n$, such that $x_0<x_1<\hdots<x_n$. If $S''(x_0)=S''(x_n)=0$, then there exists a unique piecewise cubic polynomial that satisfies the conditions of \textbf{Step 3}.}\\\\\begin{center}\textsc{The end}\end{center}\end{document}

Page 37: Web viewFor monomial basis, matrix $A$ is increasingly ill-conditioned as degree increases. Ill-conditioning does not prevent fitting data points well, since residual for

Polit\'{e}cnico Superior de Ingenieros Universidad de Zaragoza. C/ Mar\'{i}a de Luna, 3. 50018 Zaragoza, Espa\~{n}a, Departamento de Matem\'{a}ticas - Facultad de Ciencias Universidad de Zaragoza. Pza. San FranciscO. 50009 Zaragoza, Espa\~{n}a.\bibitem {11} Ben-yu Guo, Jie Shen, Cheng-long Xu, \textit{Spectral and pseudospectral approximations using Hermite functions: application to the Dirac equation}, School of Mathematical Sciences, Shanghai Normal University, Shanghai, 200234, P.R. China; Department of Mathematics, Xiamen University, Xiamen, 361005, P.R. China; Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA; Department of Applied Mathematics, Tongji University, Shanghai, 200092, P.R. China; Advances in Computational Mathematics 19: 35–55, 2003.\bibitem {12} Jim Lambers, \textit{Hermite Interpolation}, MAT 772, Fall Semester 2010-2011, Lecture 6 Notes.\bibitem {13} Milan A. Kova\v{c}evi\'{c}, \textit{A simple algorithm for the construction of Lagrange and Hermite interpolating polynomial in two variables}, FACTA UNIVERSITATIS (NI\v{S}) Ser. Math. Inform. Vol. 22, No. 2 (2007), pp. 165-174, 2000 Mathematics Subject Classification. Primary 41A05; Secondary 65D05 20 June 2007.\bibitem {14} W. F. Finden, \textit{An error term and uniqueness for Hermite-Birkhoff interpolation involving only function values and/or first derivative values}, Saint Mary's University, Halifax, Canada, January 12, 2006.\bibitem {15} M. D\ae hlen, T. Lyche, K. M$\phi$rken, H-P. Seidel, \textit{Multiresolution analysis based on quadratic Hermite interpolation - Part 1: Piecewise polynomial Curves}.\bibitem {16} Fred J. Hickernell, Shijun Yang, \textit{Explicit Hermite interpolating polynomials via the cycle index with applications}, Institute for Scientific Computing and Information, International Journal of Numerical Analysis and Modeling, Volume 5, Number 3, Pages 457-465.\bibitem {17} Zhaoliang Meng, Zhongxuan Luo, \textit{On the singularity of multivariate Hermite interpolation}, School of Mathematical Sciences,Dalian University of Technology, Dalian, 116024, China; School of Software,Dalian University of Technology, Dalian, 116620, China, October 29, 2013.\bibitem {18} Yongfend Gu, Tom VanCourt, Martin C. Herbordt, \textit{Improved interpolation and system integration for FPGA-based molecular dynamics simulations}, Department of Electrical and Computer Engineering Boston University; Boston, MA 02215.\bibitem {19} Fadoua Balabdaoui, Jon A. Wellner, \textit{Conjecture of error boundedness in a new Hermite interpolation problem via splines odd-degree}, Georg-August Universit\"{a}t and University of Washington, May 18, 2005.\bibitem {20} J. F. Traub, \textit{On Lagrange-Hermite interpolation}, J. Soc. Indust. Appl. Math. Vol 12, No. 4, December, 1964.\bibitem {21} Ciro Ciliberto, Francesca Cioffi, Rick Miranda, Ferruccio Orecchia, \textit{Bivariate Hermite interpolation and linear systems of plane curves with base fat points}, Italy.\bibitem {22} Borislav Bojanov, Yuan Xu, \textit{On a Hermite interpolation by polynomials of two variables}.\bibitem {23} A. K. B. Chand, M. A. Navascu\'{e}s, \textit{Generalized Hermite fractal interpolation}, Dept. of Mathematics, Indian Institute of Technology Madras, Chennai 600036, India. Depto. 
de Matem\'{a}tica Aplicada, CPS, Universidad de Zaragoza, Mar\'{i}a de Luna 3, Zaragoza 50018, Spain.\bibitem {24} Ar\'{e}valo, F\"{u}hrer, \textit{Numerical Approximation}, Lund University.\bibitem {25} Alex Alon, Sven Bergmann, \textit{Generic smooth connection functions: a new analytic approach to Hermite interpolation}, Institute of Physics publishing, J. Phys. A: Math. Gen. 35 (2002) 3877-3898, Journal of Physics A: Mathematical and General, 19 April 2002.

Page 38: Web viewFor monomial basis, matrix $A$ is increasingly ill-conditioned as degree increases. Ill-conditioning does not prevent fitting data points well, since residual for

\bibitem {26} Katherine Bal\'{a}zs, \textit{Lagrange and Hermite interpolation processes on the positive real line}, Karl Marx University, Institute of Mathematics, Budapest 5, Hungary, June 11, 1985.
\bibitem {27} W. H. Enright, \textit{CSCC51H - Numerical approximation, integration and ordinary differential equations}, Computer and Mathematical Sciences Division, University of Toronto at Scarborough.
\bibitem {28} J. F. Traub, \textit{On Lagrange-Hermite interpolation}, J. Soc. Indust. Appl. Math., Vol. 12, No. 4, December 1964.
\bibitem {29} Robert M. Corless, Azar Shakoori, D. A. Aruliah, Laureano Gonzalez-Vega, \textit{Barycentric Hermite interpolants for event location in initial-value problems}, European Society of Computational Methods in Sciences and Engineering (ESCMSE), Journal of Numerical Analysis, Industrial and Applied Mathematics (JNAIAM), vol. 1, no. 1, 2007, pp. 1-14, ISSN 1790-8140.
\bibitem {30} Luis E. Tob\'{o}n, \textit{Newton, Lagrange and Hermite interpolation: convergence and Runge phenomena}, January 19, 2011.
\bibitem {31} R. Szeliski, M. R. Ito, \textit{New Hermite cubic interpolator for two-dimensional curve generation}, 12 March 1985.
\bibitem {32} Daniel Appel\"{o}, Thomas Hagstrom, \textit{On advection by Hermite methods}, Department of Mathematics and Statistics, The University of New Mexico, 1 University of New Mexico, Albuquerque; Dallas.
\bibitem {33} S. Adjerid, \textit{Notes for Numerical Analysis Math 5466}, Virginia Polytechnic Institute and State University.
\bibitem {34} Heike Fassbender, Javier Perez Alvaro, Nikta Shayanfar, \textit{A sparse linearization for Hermite interpolation matrix polynomials}, Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester, November 2015.
\bibitem {35} Pierre B\'{e}zier, \textit{Interpolation and Curve Fitting}.
\bibitem {36} Yongyang Cai, Kenneth L. Judd, \textit{Dynamic Programming with Hermite Interpolation}, Hoover Institution, 424 Galvez Mall, Stanford University, Stanford, CA 94305, February 13, 2012.
\bibitem {37} G. M\"{u}hlbach, \textit{An Algorithmic Approach to Hermite-Birkhoff-Interpolation}, Numer. Math. 37, 339-347 (1981).
\bibitem {38} Ji\v{r}\'{i} Kosinka, Bert J\"{u}ttler, \textit{$C^1$ Hermite Interpolation by Pythagorean Hodograph Quintics in Minkowski space}, Johannes Kepler University, Institute of Applied Geometry, Altenberger Str. 69, A-4040 Linz, Austria.
\bibitem {39} Hans-Bernd Knoop, \textit{On Hermite interpolation in normed vector spaces}, Institut f\"{u}r Mathematik der Ruhr-Universit\"{a}t Bochum, Bochum, West Germany.
\bibitem {40} Biancamaria Della Vecchia, Giuseppe Mastroianni, P\'{e}ter V\'{e}rtesi, \textit{Simultaneous approximation by Hermite interpolation of higher order}, 23 October 1992.
\bibitem {41} Michael T. Heath, \textit{Scientific Computing: An Introductory Survey}, Department of Computer Science, University of Illinois at Urbana-Champaign, 2002.
\bibitem {42} Charlie C. L. Wang, Kai Tang, \textit{Non-self-overlapping Hermite interpolating mapping: a practical solution for structured quadrilateral meshing}, Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong; Department of Mechanical Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, N.T., Hong Kong.
\bibitem {43} Costanza Conti, Lucia Romani, Michael Unser, \textit{Ellipse-preserving Hermite interpolation and subdivision}, Journal of Mathematical Analysis and Applications.


\bibitem {44} Olli Niemitalo, \textit{Polynomial interpolators for high-quality resampling of oversampled audio}, October 2001.
\bibitem {45} D. Levy, \textit{Interpolation}.
\bibitem {46} P. M. Prenter, \textit{Lagrange and Hermite interpolation in Banach spaces}, Colorado State University, Fort Collins, Colorado 80521, and Mathematics Research Center, University of Wisconsin, Madison, Wisconsin 53706, Journal of Approximation Theory 4, 419-432 (1971), April 10, 1970.
\bibitem {47} Ga\v{s}per Jakli\v{c}, Jernej Kozak, Marjeta Krajnc, Vito Vitrih, Emil \v{Z}agar, \textit{Hermite geometric interpolation by rational B\'{e}zier spatial curves}, University of Ljubljana, University of Primorska.
\bibitem {48} Rekha P. Kulkarni, \textit{Elementary Numerical Analysis, Cubic Spline Interpolation}, Department of Mathematics, Indian Institute of Technology, Bombay.
\bibitem {49} James M. Hyman, \textit{Accurate Monotonicity Preserving Cubic Interpolation}, SIAM J. Sci. Stat. Comput., Vol. 4, No. 4, December 1983.
\bibitem {50} H. J. Oberle, H. J. Pesch, \textit{Numerical Treatment of delay differential equations by Hermite interpolation}, Institut f\"{u}r Mathematik der Technischen Universit\"{a}t M\"{u}nchen, Arcisstr. 21, D-8000 M\"{u}nchen 2, Germany (Fed. Rep.).
\bibitem {51} Milan Kub\'{i}\v{c}ek, Drahoslava Janovsk\'{a}, Miroslava Dubcov\'{a}, \textit{Numerical methods and algorithms}, 2005.
\bibitem {52} Nicolas Crouseilles, Guillaume Latu, Eric Sonnendr\"{u}cker, \textit{Hermite Spline interpolation on patches for parallelly solving the Vlasov-Poisson equation}, Int. J. Appl. Math. Comput. Sci., 2007, Vol. 17, No. 3, 335-349.
\bibitem {53} FuSen Lin, \textit{Chapter 3: Piecewise polynomial interpolation}, Department of Computer Science and Engineering, National Taiwan Ocean University, Scientific Computing, Fall 2007.
\bibitem {54} Chandrajit L. Bajaj, Insung Ihm, \textit{Algebraic Surface design with Hermite interpolation}, Purdue University, Purdue e-Pubs, Computer Science Technical Reports, Department of Computer Science, 1990.
\bibitem {55} \"{O}mer E\u{g}eciou\u{g}lu, E. Gallopoulos, \c{C}etin K. Ko\c{c}, \textit{Fast Computation of Divided Differences and Parallel Hermite interpolation}, Department of Computer Science, University of California, Santa Barbara, California; Center for Supercomputing Research and Development and Department of Computer Science, University of Illinois, Urbana-Champaign, Illinois; Department of Electrical Engineering, University of Houston, Houston, Texas, 1989.
\bibitem {56} Michael S. Floater, Tatiana Surazhsky, \textit{Parameterization for curve interpolation}, Centre of Mathematics for Applications, Department of Informatics, University of Oslo, P.O. Box 1035, 0316 Oslo, Norway, 2005.
\bibitem {57} A. C. Ahlin, \textit{A bivariate generalization of Hermite's interpolation formula}, 1963.
\bibitem {58} Torsten M\"{o}ller, \textit{Interpolation Techniques}, CMPT 466, Computer Animation.
\bibitem {59} Yukinori Iwashita, \textit{Piecewise polynomial interpolations}, OpenGamma Quantitative Research, November 19, 2014.
\bibitem {60} Weinan E, Tiejun Li, \textit{Lecture 10 Polynomial interpolation}, Department of Mathematics, Princeton University, School of Mathematical Sciences, Peking University.
\bibitem {61} Peter Philip, \textit{Numerical Analysis I}, April 10, 2015.
\bibitem {62} Burhan A. Sadiq, \textit{Finite difference methods, Hermite interpolation and a Quasi-uniform spectral scheme (QUSS)}, University of Michigan, 2013.
\bibitem {63} Ji\v{r}\'{i} Kosinka, Bert J\"{u}ttler, \textit{$G^1$ Hermite interpolation by Minkowski Pythagorean Hodograph Cubics}, Johannes Kepler


University, Institute of Applied Geometry, Altenberger Str. 69, A-4040 Linz, Austria, 30 October 2006.
\bibitem {64} Joe Mahaffy, \textit{Numerical Analysis and Computing, Lecture Notes 5 - Interpolation and Polynomial Approximation, Divided differences, and Hermite interpolatory polynomials}, Department of Mathematics, Dynamical Systems Group, Computational Sciences Research Center, San Diego State University, Spring 2010.
\bibitem {65} Attila M\'{a}t\'{e}, \textit{Introduction to Numerical Analysis with C programs}, Brooklyn College of the City University of New York, July 2004.
\bibitem {66} G. Birkhoff, M. H. Schultz, R. S. Varga, \textit{Piecewise Hermite interpolation in one and two variables with applications to partial differential equations}, Harvard University, Case Western Reserve University, July 31, 1967.
\bibitem {67} P. Bouboulis, L. Dalla, M. Kostaki-Kosta, \textit{Construction of smooth fractal surfaces using Hermite fractal interpolation functions}.
\bibitem {68} Don Fussell, \textit{Interpolating curves}, University of Texas at Austin, Computer Graphics, Fall 2010.
\bibitem {69} Renzhong Feng, Yanan Zhang, \textit{Piecewise Bivariate Hermite interpolations for large sets of scattered data}, School of Mathematics and Systematic Science, Beijing University of Aeronautics and Astronautics, Beijing 100191, China; Key Laboratory of Mathematics, Informatics and Behavioral Semantics, Ministry of Education, Beijing 100191, China.
\bibitem {70} Jianbao Wu, \textit{Spherical splines for Hermite interpolation and surface design}, Zhejiang University, Athens, Georgia, 2007.
\bibitem {71} Simon Fuhrmann, Michael Kazhdan, Michael Goesele, \textit{Accurate Isosurface interpolation with Hermite data}.
\end{thebibliography}
\end{document}