Lecture 004 (7/29/2019)
Polynomial Approximation by Least Squares
Distance in a Vector Space
The 2-norm of an integrable function $f$ over an interval $[a, b]$ is

$$\|f\|_2^2 = \int_a^b f(x)^2 \, dx.$$

The least-squares approximation minimizes

$$\|f - p_n\|_2^2 = \int_a^b \left( f(x) - p_n(x) \right)^2 dx$$

with respect to all polynomials of degree $n$. We use the approximation

$$p_n(x) = \sum_{i=0}^{n} a_i x^i$$

and

$$\|f - p_n\|_2^2 = \int_a^b \left( f(x) - \sum_{i=0}^{n} a_i x^i \right)^2 dx.$$
The minimum value of the error follows by setting the derivative with respect to each coefficient to zero:

$$\frac{d}{d a_j} \|f - p_n\|_2^2 = 0 = \frac{d}{d a_j} \int_a^b \left( f(x) - \sum_{i=0}^{n} a_i x^i \right)^2 dx,$$
which is

$$0 = \frac{d}{d a_j} \int_a^b \left[ f(x)^2 - 2 \sum_{i=0}^{n} a_i x^i f(x) + \left( \sum_{i=0}^{n} a_i x^i \right)^2 \right] dx$$

$$0 = \frac{d}{d a_j} \int_a^b f(x)^2 \, dx - 2 \frac{d}{d a_j} \sum_{i=0}^{n} a_i \int_a^b x^i f(x) \, dx + \frac{d}{d a_j} \int_a^b \left( \sum_{i=0}^{n} a_i x^i \right)^2 dx$$

$$0 = -2 \int_a^b x^j f(x) \, dx + 2 \sum_{i=0}^{n} a_i \int_a^b x^i x^j \, dx$$

$$0 = -b_j + \sum_{i=1}^{n+1} a_i c_{j,i} \qquad \text{with } j = 0, 1, 2, \ldots, n$$
$$\sum_{i=1}^{n+1} a_i c_{j,i} = b_j \qquad \text{with } j = 0, 1, 2, \ldots, n$$
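The normal equations can be assembled and solved numerically. As a sketch, here is a small Python version (Python with NumPy/SciPy is my choice for illustration; the lecture itself works in Mathematica):

```python
import numpy as np
from scipy.integrate import quad

def ls_poly_coeffs(f, a, b, n):
    """Solve the normal equations sum_i a_i c_{j,i} = b_j for the
    coefficients of the degree-n least-squares polynomial on [a, b]."""
    # c_{j,i} = integral_a^b x^(i+j) dx, with 0-based indices i, j
    C = np.array([[(b**(i + j + 1) - a**(i + j + 1)) / (i + j + 1)
                   for i in range(n + 1)]
                  for j in range(n + 1)])
    # b_j = integral_a^b x^j f(x) dx, computed by numerical quadrature
    rhs = np.array([quad(lambda x, j=j: x**j * f(x), a, b)[0]
                    for j in range(n + 1)])
    return np.linalg.solve(C, rhs)

# Example: degree-2 least-squares fit of e^x on [0, 1]
coeffs = ls_poly_coeffs(np.exp, 0.0, 1.0, 2)
```

For $f(x) = e^x$ on $[0, 1]$ this yields coefficients numerically close to $(1.013,\ 0.851,\ 0.839)$, matching the worked example later in the lecture.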
Matrix Representation
The system determining the coefficients is

$$\begin{pmatrix}
c_{1,1} & c_{1,2} & c_{1,3} & \cdots & c_{1,n+1} \\
c_{2,1} & c_{2,2} & c_{2,3} & \cdots & c_{2,n+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_{n+1,1} & c_{n+1,2} & c_{n+1,3} & \cdots & c_{n+1,n+1}
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_n \end{pmatrix},$$
where

$$c_{i,j} = \int_a^b x^{i+j-2} \, dx = \frac{1}{i+j-1} \left( b^{i+j-1} - a^{i+j-1} \right)$$

and

$$b_i = \int_a^b x^i f(x) \, dx.$$

While this looks like an easy and straightforward solution to the problem, there are some issues of concern.
Example: Conditioning of Least Square Approximation
If we take $a = 0$, $b = 1$, then the matrix of the least-squares equations is determined by

$$c_{i,j} = \frac{1}{i + j - 1}$$

and is a Hilbert matrix, which is very ill-conditioned. We should therefore expect that any attempt to solve the full least-squares problem, or its discrete version, is likely to yield disappointing results.
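To see how quickly the conditioning deteriorates, one can compute the 2-norm condition number of the Hilbert matrix for a few sizes (a NumPy sketch of my own, not part of the lecture):

```python
import numpy as np

def hilbert(n):
    """n x n Hilbert matrix with entries H[i, j] = 1 / (i + j + 1), 0-based."""
    return np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# The condition number grows roughly exponentially with n
conds = {n: np.linalg.cond(hilbert(n)) for n in (3, 6, 9)}
```

Already the 3x3 Hilbert matrix has a condition number above 500, and for the 9x9 case it exceeds $10^{11}$, so roughly eleven significant digits can be lost in solving the system in double precision.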
Example: Least Square Approximation
Given $f(x) = e^x$ with $x \in [0, 1]$. Find the best approximation in the least-squares sense by a second-order polynomial $p_2(x)$. The matrix $c$ is given by

$$c_{i,j} = \frac{1}{i + j + 1} \qquad \text{with } i, j = 0, 1, 2, \ldots, n$$

and

$$b_j = \int_0^1 f(x) \, x^j \, dx \qquad \text{with } j = 0, 1, 2, \ldots, n.$$

The polynomial is

$$p_2(x) = a_0 + a_1 x + a_2 x^2.$$

First let us generate the matrix $c$ by
In[1]:= c = Table[1/(i + j + 1), {i, 0, 2}, {j, 0, 2}]; c // MatrixForm

Out[1]//MatrixForm=
( 1    1/2  1/3
  1/2  1/3  1/4
  1/3  1/4  1/5 )
The quantities $b_j$ can be collected in a vector of length 3:

In[2]:= b = Table[Integrate[E^x x^j, {x, 0, 1}], {j, 0, 2}]

Out[2]= {-1 + E, 1, -2 + E}
The determining system for the coefficients is

In[4]:= eqs = Thread[c.{a0, a1, a2} == b]; eqs // TableForm

Out[4]//TableForm=
a0 + a1/2 + a2/3 == -1 + E
a0/2 + a1/3 + a2/4 == 1
a0/3 + a1/4 + a2/5 == -2 + E
The solution for the coefficients $a_j$ follows by applying Gauss elimination:

In[6]:= sol = Solve[eqs, {a0, a1, a2}] // Flatten

Out[6]= {a0 -> 3 (-35 + 13 E), a1 -> 588 - 216 E, a2 -> -570 + 210 E}
Inserting the found solution into the polynomial $p_2(x)$ delivers

In[7]:= p2 = a0 + a1 x + a2 x^2 /. sol

Out[7]= 3 (-35 + 13 E) + (588 - 216 E) x + (-570 + 210 E) x^2
A graphical comparison of the function $f(x)$ and $p_2(x)$ shows the quality of the approximation:

In[8]:= Plot[Evaluate[{E^x, p2}], {x, 0, 1}, AxesLabel -> {"x", "f(x), p2(x)"}]

Out[8]= [plot of f(x) and p2(x) on [0, 1]; the two curves are nearly indistinguishable]
The absolute deviation of $p_2(x)$ from $f(x)$ is shown in the following graph, giving the error of the approximation:

In[10]:= Plot[Evaluate[Abs[E^x - p2], {x, 0, 1}], AxesLabel -> {"x", "|f(x) - p2(x)|"}]

Out[10]= [plot of the absolute error |f(x) - p2(x)| on [0, 1]; the error is of order 10^-2, largest near the endpoints]
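As a numerical cross-check (my addition, in Python rather than the lecture's Mathematica), the closed-form coefficients can be verified against the 3x3 Hilbert system, and the maximum error evaluated on a fine grid:

```python
import math
import numpy as np

e = math.e
# Closed-form coefficients of p2 from the worked example above
a0, a1, a2 = 3 * (-35 + 13 * e), 588 - 216 * e, -570 + 210 * e

# Hilbert matrix c_{ij} = 1/(i+j+1) and right-hand side b_j = integral_0^1 e^x x^j dx
C = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])
b = np.array([e - 1.0, 1.0, e - 2.0])

# The coefficients should satisfy the normal equations up to rounding
residual = C @ np.array([a0, a1, a2]) - b

# Maximum absolute deviation of p2 from e^x on a grid over [0, 1]
xs = np.linspace(0.0, 1.0, 1001)
max_err = np.max(np.abs(np.exp(xs) - (a0 + a1 * xs + a2 * xs**2)))
```

The residual vanishes to rounding, and the maximum error comes out to about $1.5 \times 10^{-2}$, consistent with the error plot.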