
LOW RESOURCES HARDWARE IMPLEMENTATION OF THE HYPERBOLIC TANGENT FOR ARTIFICIAL NEURAL NETWORKS

Darío Baptista, Fernando Morgado-Dias

Madeira Interactive Technologies Institute and Centro de Competências de Ciências Exactas e da Engenharia, Universidade da Madeira, Campus da Penteada, 9000-039 Funchal, Madeira, Portugal.

Tel:+351 291-705150/1, Fax: +351 291-705199

Abstract: Artificial Neural Networks are a widespread tool with applications in a variety of areas ranging from the social sciences to engineering. Many of these applications have reached a hardware implementation phase and have been documented in scientific papers. Unfortunately, most implementations rely on a simplified replacement of the hyperbolic tangent, which has been both the most common problem and the most resource-consuming block in hardware. This paper proposes a low-resource hardware implementation of the hyperbolic tangent, using the simplest solution that obtains the lowest error reported thus far: a set of 25 third-order polynomials obtained with Chebyshev interpolation. The results show that the proposed solution achieves a low error while promising low resource usage, as only third-order polynomials are used.

Keywords: Artificial Neural Networks, Function Approximation, Hyperbolic Tangent, Polynomial Approximation.

1. INTRODUCTION

Artificial Neural Networks (ANN) have been actively studied since the 1980s. During these decades, the need to implement trained ANNs in hardware, replacing the traditional implementation on a computer, arose in several applications. One of the first hardware implementations was developed by Nestor and Intel in 1993 for Optical Character Recognition (OCR), using Radial Basis Functions (Dias, 2004). The OCR application could not afford a dedicated computer and needed a dedicated solution. The need for hardware implementations has also arisen in medical applications of ANN which work with the human body. In (Stieglitz, 2006), many examples can be found of neural vision systems which cannot be used attached to computers. Most of these implementations of ANN use the hyperbolic tangent, and this is where building hardware implementations becomes more difficult, since even the simplest solutions are highly resource-consuming or take a long time to produce a result. Many solutions have been tested: piecewise linear functions, Taylor series, polynomial approximations, Elliott functions and the CORDIC algorithm, to name but a few. The best solutions found in the review of the literature are (Pinto, 2006), which uses a set of five 5th-order polynomial approximations with a maximum error of 8x10-5 (for the sigmoid function), and (Ferreira, 2007), which uses a piecewise linear approximation determined by an optimized algorithm and achieves an error of 2.18x10-5 for the hyperbolic tangent. This paper searches for the best replacement for the hyperbolic tangent as an activation function: the simplest hardware implementation with the lowest error. This solution is meant to replace the hyperbolic tangent after the usual training process on a Personal Computer has been completed. To obtain such a solution, different polynomial approximations were tested: Lagrange, Chebyshev and Least Squares. As will be demonstrated further on, the Chebyshev approximation is the best solution.

2. THEORETICAL DESCRIPTION

This section describes the theoretical aspects of the most well-known methods used to approximate a function by a polynomial, namely the Lagrange and Chebyshev interpolations, as well as the Least Squares method.

2.1 Lagrange interpolation

Consider a function f defined on an interval [a, b] ⊆ R, a set X of n + 1 distinct nodes and a set Y where each element is the value of the function to interpolate at the corresponding element of X:

$X = \{x_0, x_1, x_2, x_3, \ldots, x_n\}$    (1)

$Y = \{f(x_0), f(x_1), \ldots, f(x_n)\}$    (2)

Page 2: LOW RESOURCES HARDWARE IMPLEMENTATION …cee.uma.pt/morgado/Down/NCA_polinomial_v2.pdfDarío Baptista, Fernando Morgado-Dias Madeira Interactive Technologies Institute and Centro de

Also consider P(x), a polynomial whose degree is equal to n which interpolates the values of Y on each element of X (Rio, 2002):

$P(x) = \sum_{k=0}^{n} L_k(x)\, f(x_k)$    (3)

where $L_k$ is the k-th Lagrange basis polynomial. It is desired that, for each i, the condition $P(x_i) = f(x_i)$ be satisfied. The easiest way to impose this condition is to define the Lagrange basis polynomials such that:

$L_k(x_i) = \begin{cases} 1 & \text{if } i = k \\ 0 & \text{if } i \neq k \end{cases}$    (4)

Therefore, the Lagrange basis polynomials on the points of the set X can be built using the following equation (Rio, 2002):

$L_k(x) = \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{x - x_i}{x_k - x_i}$    (5)
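As a concrete illustration of equations 3-5, the following sketch builds the Lagrange interpolating polynomial for samples of the hyperbolic tangent. This is a minimal Python example for illustration only; the paper's implementation was done in MatLab, and the node values chosen here are arbitrary.

```python
import math

def lagrange_basis(k, x, nodes):
    """L_k(x): product over i != k of (x - x_i) / (x_k - x_i), equation 5."""
    Lk = 1.0
    for i, xi in enumerate(nodes):
        if i != k:
            Lk *= (x - xi) / (nodes[k] - xi)
    return Lk

def lagrange_interpolate(x, nodes, values):
    """P(x) = sum over k of L_k(x) * f(x_k), equation 3."""
    return sum(lagrange_basis(k, x, nodes) * fk for k, fk in enumerate(values))

nodes = [0.0, 0.5, 1.0, 1.5]            # four sample nodes in [a, b] = [0, 1.5]
values = [math.tanh(t) for t in nodes]  # samples of the function to interpolate
# P matches f exactly at the nodes, which is the condition P(x_i) = f(x_i)
p_at_node = lagrange_interpolate(1.0, nodes, values)
```

Note how equation 4 is satisfied automatically: at a node $x_i$, every basis term $L_k(x_i)$ with $k \neq i$ contains the zero factor $(x_i - x_i)$.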

In some cases, this method will not prove to be the most convenient representation of the polynomial interpolation. On the one hand, it is possible to obtain this polynomial with fewer arithmetic operations. On the other hand, the Lagrange polynomial (equation 5) depends on a certain set of points, and with a change in the position or in the number of these points, the Lagrange polynomial changes completely (Rio, 2002).

2.2 Least Squares method

Consider a function f defined on an interval [a, b] ⊆ R, a set X with m equidistant nodes and a set Y with the corresponding function values:

$X = \{x_1, x_2, \ldots, x_m\}$    (6)

$Y = \{f(x_1), f(x_2), \ldots, f(x_m)\}$    (7)

The objective of the Least Squares method is to find a polynomial P(x) of degree n. For n < m, it is possible to find a polynomial that adjusts the data so as to minimize the total quadratic error (Valença, 1993),

$S = \sum_{i=1}^{m} \left( f(x_i) - P(x_i) \right)^2$    (8)

The Least Squares method minimizes the sum of squared residuals (it is also called the Sum of Squared Errors, SSE) (Valença, 1993). For the SSE to be null it is necessary to verify the following condition:

$P(x_1) = f(x_1),\; P(x_2) = f(x_2),\; \ldots,\; P(x_m) = f(x_m)$, i.e.

$\begin{cases} c_0 + c_1 x_1 + \cdots + c_n x_1^n = f(x_1) \\ c_0 + c_1 x_2 + \cdots + c_n x_2^n = f(x_2) \\ \quad\vdots \\ c_0 + c_1 x_m + \cdots + c_n x_m^n = f(x_m) \end{cases}$    (9)

Minimizing the SSE is not a simple task, since when m > n (a system with more equations than unknowns) the system is over-determined. In order to obtain a solution for equation 9, an auxiliary matrix, known as the Vandermonde matrix, can be used; its columns are linearly independent. In some cases, the system cannot be solved exactly, because y is not a linear combination of the columns of the matrix V (Valença, 1993). Thus, equation 9 can be represented by the following matrix equation (Valença, 1993):


$\begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ 1 & x_3 & x_3^2 & \cdots & x_3^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_m & x_m^2 & \cdots & x_m^n \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ \vdots \\ f(x_m) \end{bmatrix} \;\Leftrightarrow\; Vc = y$    (10)

Figure 1 illustrates the geometry of the Least Squares method. The parallelogram represents the subspace span(V), generated by the matrix V, to which the vector y usually does not belong (Valença, 1993).

Fig. 1: Geometric description of the Least Squares problem, showing the vector y, its projection Vc onto span(V) and the residual r = y − Vc (Valença, 1993).

Considering a matrix V ∈ R^(m×n) (where m > n) and a vector y, one now looks for a vector c such that the residual r is as small as possible. Figure 1 shows that the length of the residual is minimal when r ⊥ span(V) (Valença, 1993), i.e.:

$V^T r = 0 \;\Leftrightarrow\; V^T (y - Vc) = 0 \;\Leftrightarrow\; V^T V c = V^T y$    (11)

Thus, to calculate the polynomial coefficients of P(x) using the Least Squares method, it suffices to solve the normal equations of equation 11, which form a consistent and determined system. The solution is an approximation of the original system, Vc ≈ y, which minimizes the residual. This approach has a disadvantage, however: as the degree of the polynomial increases, the columns of the matrix V approach linear dependence (Valença, 1993).

2.3 Chebyshev interpolation

A widely used class of orthogonal polynomials is that of the Chebyshev polynomials (Gil, 2007). The Chebyshev polynomials Tn(x) are generated by the following three-term recurrence relation (Gil, 2007)(Mason, 2003):
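A minimal sketch of the Least Squares fit through the normal equations of equations 10-11, in illustrative Python/NumPy rather than the paper's MatLab. For high degrees, where the columns of V approach linear dependence, a solver such as `numpy.polyfit` (based on a least-squares solve rather than the explicit normal equations) is better conditioned.

```python
import numpy as np

def lsq_poly_coeffs(x, y, n):
    """Coefficients c_0..c_n of P(x) obtained from V^T V c = V^T y (eq. 11)."""
    V = np.vander(x, n + 1, increasing=True)   # Vandermonde matrix of eq. 10
    return np.linalg.solve(V.T @ V, V.T @ y)   # normal equations

x = np.linspace(0.0, 1.0, 20)      # m = 20 equidistant nodes (m > n)
y = np.tanh(x)                     # samples of the function to fit
c = lsq_poly_coeffs(x, y, 3)       # third-order fit
residual = y - np.vander(x, 4, increasing=True) @ c
# the residual r = y - Vc is orthogonal to the columns of V (r ⊥ span(V))
```

The orthogonality of the residual to span(V) is exactly the geometric picture of Figure 1.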

$T_0(x) = 1, \quad T_1(x) = x, \quad T_n(x) = 2x\,T_{n-1}(x) - T_{n-2}(x)$    (12)

Note that Tn(x) is a polynomial in x of degree n only when n is a non-negative integer. Figure 2 shows the graphical representation of the Chebyshev polynomials up to degree 5. These polynomials form a sequence of orthogonal polynomials on the interval [-1, 1].


Fig. 2: Graphical representation of Chebyshev polynomials up to degree 5

Any arbitrary polynomial of degree n can be written as a sum of Tn(x). The polynomial p(x) can be calculated according to the following equation (Gil, 2007)(Mason, 2003):

$p(x) \approx \sum_{k=0}^{n} c_k\,T_k(x)$    (13)

where $c_k$ is the coefficient of order k, which in turn can be calculated through the following equation (Mason, 2003):

$c_k = \frac{2}{n} \sum_{j=0}^{n-1} f(x_j)\,T_k(x_j)$    (14)
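Equations 12-14 can be sketched as follows (an illustrative Python example; note that, with the 2/n factor of equation 14, the k = 0 coefficient is conventionally halved when the sum of equation 13 is evaluated, and the nodes $x_j$ are the Chebyshev nodes introduced below):

```python
import math

def cheb_T(k, x):
    """T_k(x) via the three-term recurrence of equation 12."""
    if k == 0:
        return 1.0
    t_prev, t_curr = 1.0, x              # T_0 and T_1
    for _ in range(k - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

def cheb_coeffs(f, n):
    """Coefficients c_k of equation 14 at the n Chebyshev nodes of equation 15."""
    nodes = [math.cos((2 * j - 1) / (2 * n) * math.pi) for j in range(1, n + 1)]
    return [(2.0 / n) * sum(f(xj) * cheb_T(k, xj) for xj in nodes)
            for k in range(n)]

def cheb_eval(c, x):
    """Truncated series of equation 13, with the conventional halved c_0 term."""
    return 0.5 * c[0] + sum(ck * cheb_T(k, x) for k, ck in enumerate(c) if k > 0)

c = cheb_coeffs(math.tanh, 12)
# the truncated series closely approximates tanh on [-1, 1]
```

The same machinery is available ready-made in `numpy.polynomial.chebyshev`, should a library implementation be preferred.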

Ideally, n should equal infinity; however, this is not possible. For fixed n, equation 13 is a polynomial in x which approximates the function f(x) in the interval [-1, 1]. It is important to mention that the higher the value of n, the better the approximation achieved (Gil, 2007)(Mason, 2003). A Chebyshev polynomial of degree n has n roots in the interval [-1, 1] (Gil, 2007)(Mason, 2003). These values are called Chebyshev nodes and can be obtained with the following equation (Gil, 2007):

$x_i = \cos\left( \frac{2i-1}{2n}\,\pi \right), \quad i = 1, 2, 3, \ldots, n$    (15)

The distribution of these nodes on the x-axis is not equidistant. The following scheme shows a sketch of the nodes' distribution (Mason, 2003).

Fig. 3: Scheme of the distribution of 5, 10 and n Chebyshev nodes on [-1, 1]

The Chebyshev nodes are, however, equidistant on the semi-circumference of unitary radius and, consequently, these nodes accumulate near the extremes of the x-axis, as shown in Figure 4 (Quarteroni and Saleri, 2007).


Fig. 4: Graphical representation of the Chebyshev nodes on the unitary semi-circumference. (a) n=5; (b) n=13

It is important to remember that equation 15 calculates the Chebyshev nodes in the interval [-1, 1]. If one wishes to calculate n nodes in an arbitrary interval [a, b], it is necessary to use the following equation (Mason, 2003):

$x_i = \frac{b-a}{2}\cos\left( \frac{2i-1}{2n}\,\pi \right) + \frac{b+a}{2}, \quad i = 1, 2, 3, \ldots, n$    (16)

The values found with equation 16 need to be translated back to the interval [-1, 1], because it is in this interval that the Chebyshev polynomials are orthogonal (Mason, 2003). Ultimately, a change of scale on the x-axis is necessary. Consider a function f defined on the interval [a, b] ⊆ R and a set X with n Chebyshev nodes, where:

$X = \{x_1, x_2, \ldots, x_{n-1}, x_n\}$    (17)

Fig. 5: Scheme of the distribution of the Chebyshev nodes of set X in [a, b]

and consider a set Y, where each element is the value of the function to interpolate at the corresponding element of X.

$Y = \{f(x_1), f(x_2), \ldots, f(x_{n-1}), f(x_n)\}$    (18)

It is necessary, however, that these Chebyshev nodes be within the interval [-1, 1], because this is where the Chebyshev polynomials are orthogonal (Gil, 2007)(Mason, 2003). Consequently, one must ensure that the length of the interval [a, b] becomes the same as that of [-1, 1], i.e., the distance between a and b must become 2.

$k(b-a) = 2 \;\Leftrightarrow\; k = \frac{2}{b-a}, \quad k \in \mathbb{R}$    (19)

The extremes of the interval [a, b] must therefore be multiplied by a factor k (equation 19) so that the distance between them becomes equal to 2. Additionally, each of the values of set X must also be multiplied by k to correspond to the new interval.

Fig. 6: Scheme of the scaling of set X's nodes (each node x_i becomes k·x_i, and the interval [a, b] becomes [ka, kb])



Now the values of the set X need to be moved into positions between -1 and 1. Hence, one simply subtracts (ka + 1) from each of set X's scaled values.

Fig. 7: Scheme of the scaling and translation of set X's nodes (each node x_i becomes k·x_i − (ka+1), so that ka − (ka+1) = −1 and kb − (ka+1) = 1)

Put briefly, in order to perform the scaling and translation of the Chebyshev nodes from the interval [a, b] to the interval [-1, 1], set X should be written as follows:

$\Delta X = \{\Delta x_1, \Delta x_2, \ldots, \Delta x_{n-1}, \Delta x_n\}$    (20)

where

$\Delta x_i = k\,x_i - (ka + 1), \quad k \in \mathbb{R}$    (21)
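The node construction and the scaling and translation steps (equations 16 and 19-21) can be sketched as follows. This is an illustrative Python example with hypothetical helper names, not the paper's MatLab code; the sub-interval [0, 0.25] is an arbitrary sample.

```python
import math

def chebyshev_nodes(n, a, b):
    """n Chebyshev nodes mapped to the interval [a, b], equation 16."""
    return [0.5 * (b - a) * math.cos((2 * i - 1) / (2 * n) * math.pi)
            + 0.5 * (b + a) for i in range(1, n + 1)]

def to_unit_interval(x, a, b):
    """Map a value from [a, b] to [-1, 1]: scale by k, subtract ka + 1."""
    k = 2.0 / (b - a)                # scaling factor of equation 19
    return k * x - (k * a + 1.0)     # translation of equation 21

a, b = 0.0, 0.25                     # a sample sub-interval of the tanh domain
mapped = [to_unit_interval(x, a, b) for x in chebyshev_nodes(4, a, b)]
# the endpoints a and b map to -1 and 1, and every node lands inside [-1, 1]
```

The endpoint checks ka − (ka+1) = −1 and kb − (ka+1) = 1 from Figure 7 hold by construction.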

3. IMPLEMENTATIONS AND RESULTS

This paper focuses on determining the quality of the approximation of the hyperbolic tangent obtained by the proposed solutions. These solutions, previously described in section 2, were implemented in MatLab, replacing the hyperbolic tangent by a polynomial in order to analyse the difference from the real function. Figure 8 shows the flowchart which describes the algorithm.

(Flowchart summary: the algorithm starts by inserting the data — Inter, the choice of interpolating approach; fname, the function to interpolate; a and b, the interval [a, b]; n, the degree of the polynomial; and m, the number of samples of the function, applicable only to the Least Squares method. It then branches according to Inter. The Lagrange branch calculates equidistant nodes in [a, b], evaluates the function at these nodes and builds the interpolating polynomial with equations 3-5. The Least Squares branch builds the Vandermonde matrix V and its transpose and obtains the coefficients from the normal equations, computed in MatLab as (V'*y)/(V'*V), before evaluating the polynomial in [a, b]. The Chebyshev branch calculates the Chebyshev nodes in [a, b], evaluates the function at these nodes, scales and translates them to [-1, 1], builds the Chebyshev polynomial there with equations 12-14 and finally replaces the variable x in the polynomial by kx − (ka+1).)

Fig. 8: Flowchart that describes the different approaches


Before calculating the polynomials, it was necessary to analyse the parity of the hyperbolic tangent. Figure 9 shows that this function is symmetric about the origin, i.e., it is an odd function, as shown in equation 22.

$f(-x) = -f(x)$    (22)

Fig. 9: Graphical representation of the hyperbolic tangent

It was therefore possible to reduce the part of the hyperbolic tangent which was effectively implemented and, based on this part, derive the rest of the function. Thus the domain was divided (only for x > 0) into segments, and each segment was approximated by a polynomial section. For the present implementation, the domain 0 < x < 6 was divided so as to obtain, in each sub-interval, a polynomial with a Mean Square Error (MSE) of 1x10-14 when compared to the original function. For x > 6, the function was approximated by a constant value, whereas the remaining intervals used third-order polynomials. The number and the division of the intervals depend on the approximation: once the MSE reaches the predefined value of 1x10-14, a new interval must be inserted. As a result, the number of intervals and the maximum errors differ for each approximation, despite the maximum MSE being the same. Figures 10, 11 and 12 show the error and the division of the hyperbolic tangent for the different interpolations.
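The interval-division strategy described above can be sketched as follows. This is only an illustration of the idea: it bisects sub-intervals greedily and uses a NumPy Least Squares fit per piece, whereas the paper obtains its intervals with Chebyshev interpolation in MatLab; the function `split_until_mse` and its parameters are hypothetical.

```python
import numpy as np

def split_until_mse(f, a, b, target_mse, degree=3, samples=200):
    """Bisect [a, b] until a degree-3 fit on each piece meets target_mse."""
    stack, pieces = [(a, b)], []
    while stack:
        lo, hi = stack.pop()
        x = np.linspace(lo, hi, samples)
        coeffs = np.polyfit(x, f(x), degree)            # 3rd-order fit on piece
        mse = np.mean((np.polyval(coeffs, x) - f(x)) ** 2)
        if mse <= target_mse:
            pieces.append((lo, hi, coeffs))             # piece is good enough
        else:
            mid = 0.5 * (lo + hi)                       # otherwise split it
            stack += [(lo, mid), (mid, hi)]
    return sorted(pieces, key=lambda p: p[0])

pieces = split_until_mse(np.tanh, 0.0, 6.0, 1e-14)
# every stored piece meets the MSE target by construction; for x > 6 the
# function is replaced by a constant, and x < 0 follows from odd symmetry
```

A greedy bisection like this generally yields more pieces than the paper's adaptive division, but it makes the stopping criterion — insert a new interval whenever the MSE target is not met — explicit.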

Fig. 10. Error between the real hyperbolic tangent and Lagrange polynomial approximation



Fig. 11. Error between the real hyperbolic tangent and Chebyshev polynomial approximation

Fig. 12. Error between the real hyperbolic tangent and the Least Square Method polynomial approximation

Table 1 shows the different maximum and minimum errors for each approximation of the hyperbolic tangent.

Table 1: Maximum positive and negative errors and number of intervals for each type of polynomial approximation.

Type of polynomial approximation | Maximum positive error | Maximum negative error | Number of intervals
Lagrange interpolation           | 1.832x10-7             | -1.935x10-7            | 27
Least Squares method             | 1.664x10-7             | -1.955x10-7            | 25
Chebyshev interpolation          | 1.614x10-7             | -1.929x10-7            | 25

4. CONCLUSION

In this paper, a high-resolution, low-resource alternative to the hyperbolic tangent is presented. It is based on third-order polynomial approximations, selected to avoid the use of many multiplications. This solution shows a maximum error of 1.929x10-7 when the Chebyshev interpolation is selected, with an MSE of 1x10-14. From the results presented, it is possible to conclude that the Chebyshev approximation is the best solution to represent the hyperbolic tangent by third-degree polynomials. By analysing figures 10-12, it can be seen that the Lagrange polynomial approximation describes the hyperbolic tangent with 27 third-order polynomials. However,



if the Chebyshev approximation or the Least Squares method is used, the function can be represented with only 25 third-degree polynomials, while the Lagrange approximation needs 27. Moreover, if the maximum and minimum errors in Table 1 are analysed, one can conclude that the Chebyshev approximation is the best solution. These results can be explained by the way in which the nodes are distributed: in the Chebyshev approximation, the nodes are more concentrated at the extremities of each sub-interval, while the m nodes used in each sub-interval in the Least Squares method are equidistant.

ACKNOWLEDGMENTS

The authors would like to acknowledge the Portuguese Foundation for Science and Technology for its support of this work through the project PEst-OE/EEI/LA0009/2011.

REFERENCES

Stieglitz, T. and Meyer, J. (2006). Biomedical Microdevices for Neural Implants, BIOMEMS Microsystems, Vol. 16, pp. 71-137.

Dias, F. M., Antunes, A. and Mota, A. (2004). Artificial neural networks: a review of commercial hardware, Engineering Applications of Artificial Intelligence, Vol. 17, pp. 945-952.

Pinto, J. O. P., Bose, B. K., Leite, L. C., Silva, L. E. B. and Romero, M. E. (2006). Field Programmable Gate Array (FPGA) Based Neural Network Implementation of Stator Flux Oriented Vector Control of Induction Motor Drive, IEEE International Conference on Industrial Technology.

Ferreira, P., Ribeiro, P., Antunes, A. and Morgado Dias, F. (2007). A high bit resolution FPGA implementation of a FNN with a new algorithm for the activation function, Neurocomputing, Vol. 71, pp. 71-77.

Rio, J. A. I. and Cabezas, J. M. R. (2002). Métodos Numéricos – Teoria, problemas y práticas com Matlab, 2nd edn., Pirâmide.

Valença, M. R. (1993). Métodos Numéricos, 3rd edn., Ferreira e Salgado.

Gil, A., Segura, J. and Temme, N. M. (2007). Numerical Methods for Special Functions, SIAM – Society for Industrial and Applied Mathematics.

Mason, J. C. and Handscomb, D. C. (2003). Chebyshev Polynomials, 2nd edn., Chapman and Hall/CRC Press.

Quarteroni, A. and Saleri, F. (2007). Cálculo Científico com Matlab e Octave, Springer.

Peixoto, L. L. (2008). Quadratura de Gauss iterativa com base nos polinómios ortogonais clássicos, Programa de Pós-Graduação em Modelagem Matemática e Computacional, pp. 41-49.
