Curve Fitting Tutorial


Transcript of Curve Fitting Tutorial

Page 1: Curve Fitting Tutorial

1/13 www.ni.com

Overview of Curve Fitting Models and Methods in LabVIEW


Overview

As the usage of digital measurement instruments during the test and measurement process increases, acquiring large quantities of data becomes easier. However, processing the acquired data and extracting useful information from it becomes a challenge.

During the test and measurement process, you often see a mathematical relationship between observed values and independent variables, such as the relationship between temperature measurements, an observable value, and measurement error, an independent variable that results from an inaccurate measuring device. One way to find the mathematical relationship is curve fitting, which defines an appropriate curve to fit the observed values and uses a curve function to analyze the relationship between the variables.

You can use curve fitting to perform the following tasks:

Reduce noise and smooth data

Find the mathematical relationship or function among variables and use that function to perform further data processing, such as error compensation, velocity and acceleration calculation, and so on

Estimate the variable value between data samples

Estimate the variable value outside the data sample range

This document describes the different curve fitting models, methods, and the LabVIEW VIs you can use to perform curve fitting.

Table of Contents

1. Curve Fitting in LabVIEW
2. Application Examples
3. Summary

Curve Fitting in LabVIEW

What is Curve Fitting?

The purpose of curve fitting is to find a function f(x) in a function class Φ for the data (xi, yi) where i = 0, 1, 2, …, n–1. The function f(x) minimizes the residual under the weight W. The residual is the distance between the data samples and f(x). A smaller residual means a better fit. In geometry, curve fitting is a curve y = f(x) that fits the data (xi, yi) where i = 0, 1, 2, …, n–1.

In LabVIEW, you can use the following VIs to calculate the curve fitting function.

Linear Fit VI

Exponential Fit VI

Power Fit VI

Gaussian Peak Fit VI

Logarithm Fit VI

These VIs create different types of curve fitting models for the data set. Refer to the LabVIEW Help for information about using these VIs. The following graphs show the different types of fitting models you can create with LabVIEW.

Document Type: Tutorial | NI Supported: Yes | Publish Date: Jul 31, 2009


Figure 1. Curve Fitting Models in LabVIEW

Before fitting the data set, you must decide which fitting model to use. An improper choice, for example, using a linear model to fit logarithmic data, leads to an incorrect fitting result or a result that inaccurately determines the characteristics of the data set. Therefore, you first must choose an appropriate fitting model based on the shape of the data distribution, and then judge whether the model is suitable according to the result.

Every fitting model VI in LabVIEW has a Weight input. The default value of Weight is 1, which means all data samples have the same influence on the fitting result. In some cases, outliers exist in the data set due to external factors such as noise. If you calculate the outliers at the same weight as the other data samples, you risk a negative effect on the fitting result. Therefore, you can lower the weight of the outliers, or even set their weight to 0, to eliminate their negative influence.

You also can use the Curve Fitting Express VI in LabVIEW to develop a curve fitting application.

Curve Fitting Methods

Different fitting methods can evaluate the input data to find the curve fitting model parameters. Each method has its own criteria for evaluating the fitting residual in finding the fitted curve. By understanding the criteria for each method, you can choose the most appropriate method to apply to the data set and fit the curve. In LabVIEW, you can apply the Least Square (LS), Least Absolute Residual (LAR), or Bisquare fitting method to the Linear Fit, Exponential Fit, Power Fit, Gaussian Peak Fit, or Logarithm Fit VI to find the function f(x).

The LS method finds f(x) by minimizing the residual according to the following formula:

residual = (1/n) Σ wi (f(xi) – yi)2, summed over i = 0, 1, …, n–1

where n is the number of data samples

wi is the ith element of the array of weights for the data samples

f(xi) is the ith element of the array of y-values of the fitted model

yi is the ith element of the data set (xi, yi)

The LAR method finds f(x) by minimizing the residual according to the following formula:

residual = (1/n) Σ wi |f(xi) – yi|, summed over i = 0, 1, …, n–1

The Bisquare method finds f(x) by using an iterative process, as shown in the following flowchart, and calculates the residual by using the same formula as in the LS method. The Bisquare method calculates the data starting from iteration k.


Figure 2. Bisquare Method Flowchart

Because the LS, LAR, and Bisquare methods calculate f(x) differently, you want to choose the curve fitting method depending on the data set. For example, the LAR and Bisquare fitting methods are robust fitting methods. Use these methods if outliers exist in the data set. The following sections describe the LS, LAR, and Bisquare calculation methods in detail.

LS Method

The least square method begins with the solution of a system of linear equations.

Ax = b

A is a matrix, and x and b are vectors. Ax – b represents the error of the equations.

The following equation represents the square of the error of the previous equation.

E(x) = (Ax – b)T(Ax – b) = xTATAx – 2bTAx + bTb

To minimize the square error E(x), calculate the derivative of the previous function and set the result to zero:

E′(x) = 0

2ATAx – 2ATb = 0

ATAx = ATb

x = (ATA)–1ATb

From the algorithm flow, you can see that the calculation process is efficient because it is not iterative. Applications demanding efficiency can use this method.

The LS method calculates x by minimizing the square error and is suited to processing data that has Gaussian-distributed noise. If the noise is not Gaussian-distributed, for example, if the data contains outliers, the LS method is not suitable. You can use another method, such as the LAR or Bisquare method, to process data containing non-Gaussian-distributed noise.
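The normal-equation solution above can be sketched in a few lines of text-based code (the data here is a hypothetical noisy line with slope 2 and intercept 1; this illustrates the math, not the VI's internals):

```python
import numpy as np

# Hypothetical data: a noisy line y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)

# For the linear model f(x) = a*x + b, the matrix A has columns [x, 1]
A = np.column_stack([x, np.ones_like(x)])

# Normal equations: (A^T A) x = A^T b, solved directly (no iteration)
a, b = np.linalg.solve(A.T @ A, A.T @ y)
```

Because the solution is a single linear solve, the cost is fixed and small, which is why the text recommends LS when efficiency matters.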

LAR Method

The LAR method minimizes the residual according to the following formula:

residual = (1/n) Σ wi |f(xi) – yi|

From the formula, you can see that the LAR method is an LS method with changing weights. If a data sample is far from f(x), its weight is set relatively low after each iteration so that the sample has less negative influence on the fitting result. Therefore, the LAR method is suitable for data with outliers.
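The reweighting idea can be sketched as iteratively reweighted least squares, where each sample's weight is the inverse of its current absolute residual (a common way to approximate an absolute-residual fit; the data, outlier, and iteration count are illustrative assumptions, not LabVIEW's documented algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)
y[5] += 15.0  # inject one outlier

A = np.column_stack([x, np.ones_like(x)])
coef = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS as the starting point
for _ in range(30):
    r = np.abs(A @ coef - y)
    w = 1.0 / np.maximum(r, 1e-8)             # large residual -> small weight
    Aw = A * w[:, None]                       # weighted normal equations
    coef = np.linalg.solve(A.T @ Aw, A.T @ (w * y))
a_lar, b_lar = coef
```

Despite the large outlier, the recovered slope and intercept stay close to the true line.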

Bisquare Method

Like the LAR method, the Bisquare method also uses iteration to modify the weights of data samples. In most cases, the Bisquare method is less sensitive to outliers than the LAR method.
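One common Bisquare formulation uses Tukey's biweight, which gives far outliers exactly zero weight (the tuning constant 4.685 and the MAD scale estimate below are conventional choices and an assumption about the intended method, not the VI's documented internals):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)
y[5] += 15.0
y[20] -= 12.0  # two outliers

A = np.column_stack([x, np.ones_like(x)])
coef = np.linalg.lstsq(A, y, rcond=None)[0]
for _ in range(30):
    r = A @ coef - y
    s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
    u = r / (4.685 * max(s, 1e-12))
    w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # far outliers get weight 0
    Aw = A * w[:, None]
    coef = np.linalg.solve(A.T @ Aw, A.T @ (w * y))
a_bi, b_bi = coef
```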

Comparing the Curve Fitting Methods

If you compare the three curve fitting methods, the LAR and Bisquare methods decrease the influence of outliers by adjusting the weight of each data sample using an iterative process. Unfortunately, adjusting the weights also decreases the efficiency of the LAR and Bisquare methods.

To better compare the three methods, examine the following experiment. Use the three methods to fit the same data set: a linear model containing 50 data samples with noise. The following table shows the computation times for each method:

Table 1. Processing Times for Three Fitting Methods


Fitting Method | LS  | LAR | Bisquare
Time (μs)      | 3.5 | 30  | 60

As you can see from the previous table, the LS method has the highest efficiency.

The following figure shows the influence of outliers on the three methods:

Figure 3. Comparison among Three Fitting Methods

The data samples far from the fitted curves are outliers. In the previous figure, you can regard the data samples at (2, 17), (20, 29), and (21, 31) as outliers. The results indicate that the outliers have a greater influence on the LS method than on the LAR and Bisquare methods.

From the previous experiment, you can see that when choosing an appropriate fitting method, you must take both data quality and calculation efficiency into consideration.

LabVIEW Curve Fitting Models

In addition to the Linear Fit, Exponential Fit, Gaussian Peak Fit, Logarithm Fit, and Power Fit VIs, you also can use the following VIs to calculate the curve fitting function.

General Polynomial VI

General Linear Fit VI

Cubic Spline Fit VI

Nonlinear Curve Fit VI

General Polynomial Fit

The General Polynomial Fit VI fits the data set to a polynomial function of the general form:

f(x) = a + bx + cx2 + …

The following figure shows a General Polynomial curve fit using a third order polynomial to find the real zeroes of a data set. You can see that the zeroes occur at approximately (0.3, 0), (1, 0), and (1.5, 0).

Figure 4. General Polynomial Model

This VI calculates the mean square error (MSE) using the following equation:

MSE = (1/n) Σ wi (f(xi) – yi)2

When you use the General Polynomial Fit VI, you first need to set the Polynomial Order input. A high Polynomial Order does not guarantee a better fitting result and can cause oscillation. A polynomial of tenth order or lower satisfies most applications. The default Polynomial Order is 2.

This VI also has a Coefficient Constraint input. You can set this input if you know the exact values of some polynomial coefficients. By setting this input, the VI calculates a result closer to the true value.
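In text form, a fit like the one in Figure 4 might look like this (the data is synthesized from a cubic with roots near 0.3, 1, and 1.5 to match the figure's description; numpy's polyfit stands in for the VI):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 2.0, 60)
# Cubic with real zeroes at 0.3, 1.0, and 1.5, plus a little noise
y = (x - 0.3) * (x - 1.0) * (x - 1.5) + rng.normal(scale=0.01, size=x.size)

order = 3                         # Polynomial Order; higher orders risk oscillation
coeffs = np.polyfit(x, y, order)  # highest power first
zeros = np.sort(np.roots(coeffs).real)
```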

General Linear Fit


The General Linear Fit VI fits the data set according to the following equation:

y = a0 + a1f1(x) + a2f2(x) + … + ak–1fk–1(x)

where y is a linear combination of the coefficients a0, a1, a2, …, ak–1, and k is the number of coefficients.

The following equations show you how to extend the concept of a linear combination of coefficients so that the multiplier for a1 is some function of x.

y = a0 + a1sin(ωx)

y = a0 + a1x2

y = a0 + a1cos(ωx2)

where ω is the angular frequency.

In each of the previous equations, y is a linear combination of the coefficients a0 and a1. For the General Linear Fit VI, y also can be a linear combination of several coefficients. Each coefficient has a multiplier of some function of x. Therefore, you can use the General Linear Fit VI to calculate and represent the coefficients of the following functional models as linear combinations of the coefficients.

y = a0 + a1sin(ωx)

y = a0 + a1x2 + a2cos(ωx2)

y = a0 + a1(3sin(ωx)) + a2x3 + a3/x + …

In each of the previous equations, y can be both a linear function of the coefficients a0, a1, a2, …, and a nonlinear function of x.

Building the Observation Matrix

When you use the General Linear Fit VI, you must build the observation matrix H. For example, the following equation defines a model using data from a transducer.

y = a0 + a1sin(ωx) + a2cos(ωx) + a3x2

The following table shows the multipliers for the coefficients, aj, in the previous equation.

Coefficient Multiplier

a0 1

a1 sin(ωx)

a2 cos(ωx)

a3 x2

To build the observation matrix H, each column value in H equals the independent function, or multiplier, evaluated at each x value, xi. The following equation defines the observation matrix H for a data set containing 100 x values using the previous equation.

If the data set contains n data points and k coefficients a0, a1, …, ak–1, then H is an n × k observation matrix. Therefore, the number of rows in H equals the number of data points, n. The number of columns in H equals the number of coefficients, k.

To obtain the coefficients, a0, a1, …, ak–1, the General Linear Fit VI solves the following linear equation:

H a = y

where a = [a0 a1 … ak–1]T and y = [y0 y1 … yn–1]T.
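The observation-matrix construction above can be sketched for the transducer model (ω and the coefficient values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
omega = 2.0                       # angular frequency (illustrative value)
x = np.linspace(0.0, 5.0, 100)

# Observation matrix H for y = a0 + a1*sin(wx) + a2*cos(wx) + a3*x^2:
# one column per multiplier, evaluated at every xi (100 rows, 4 columns)
H = np.column_stack([np.ones_like(x), np.sin(omega * x),
                     np.cos(omega * x), x**2])

a_true = np.array([1.0, 0.5, -0.3, 0.2])
y = H @ a_true + rng.normal(scale=0.05, size=x.size)

# Solve Ha = y in the least-squares sense to recover the coefficients
a_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
```

Note that the model is nonlinear in x but linear in the coefficients, which is exactly what makes the single linear solve possible.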

Cubic Spline Fit

A spline is a piecewise polynomial function for interpolating and smoothing. In curve fitting, splines approximate complex shapes.

The Cubic Spline Fit VI fits the data set (xi, yi) by minimizing the following function:

p Σ wi (yi – f(xi))2 + (1 – p) ∫ λ(x) (f″(x))2 dx


where p is the balance parameter

wi is the ith element of the array of weights for the data set

yi is the ith element of the data set (xi, yi)

xi is the ith element of the data set (xi, yi)

f″(x) is the second order derivative of the cubic spline function, f(x)

λ(x) is the piecewise constant function:

where λi is the ith element of the Smoothness input of the VI

If the Balance Parameter input p is 0, the cubic spline model is equivalent to a linear model. If the Balance Parameter input p is 1, the fitting method is equivalent to cubic spline interpolation. p must fall in the range [0, 1] to make the fitted curve both close to the observations and smooth. The closer p is to 0, the smoother the fitted curve. The closer p is to 1, the closer the fitted curve is to the observations. The following figure shows the fitting results when p takes different values.

Figure 5. Cubic Spline Model

You can see from the previous figure that when p equals 1.0, the fitted curve is closest to the observation data. When p equals 0.0, the fitted curve is the smoothest, but it does not pass through any of the data points.
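The role of the balance parameter can be illustrated with a discrete analogue (a Whittaker-style smoother that penalizes second differences in place of the integral of f″; this demonstrates the p trade-off only and is not the Cubic Spline Fit VI's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

def smooth(y, p):
    # Minimize p*||y - f||^2 + (1 - p)*||D2 f||^2 over the fitted values f,
    # where D2 is the second-difference operator (a stand-in for f'')
    n = y.size
    D2 = np.diff(np.eye(n), n=2, axis=0)
    A = p * np.eye(n) + (1.0 - p) * (D2.T @ D2)
    return np.linalg.solve(A, p * y)

f_smooth = smooth(y, 0.1)    # p near 0: smoother curve
f_close = smooth(y, 0.999)   # p near 1: tracks the observations
```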

Nonlinear Curve Fit

The Nonlinear Curve Fit VI fits data to the curve using the nonlinear Levenberg-Marquardt method according to the following equation:

y = f(x; a0, a1, a2, …, ak)

where a0, a1, a2, …, ak are the coefficients and k is the number of coefficients.

The nonlinear Levenberg-Marquardt method is the most general curve fitting method and does not require y to have a linear relationship with a0, a1, a2, …, ak. You can use the nonlinear Levenberg-Marquardt method to fit linear or nonlinear curves. However, the most common application of the method is to fit a nonlinear curve, because the general linear fit method is better for linear curve fitting.
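The Levenberg-Marquardt idea can be sketched as a damped Gauss-Newton loop with a numerical Jacobian (a simplified illustration, not the VI's actual implementation; the Gaussian model, starting guess, and data are assumptions):

```python
import numpy as np

def model(x, p):
    amp, center, width = p
    return amp * np.exp(-(((x - center) / width) ** 2))

def lm_fit(x, y, p0, n_iter=100, lam=1e-3):
    # Minimal Levenberg-Marquardt: damped normal equations with a
    # central-difference Jacobian; lam blends gradient descent and Gauss-Newton
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - model(x, p)
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (model(x, p + dp) - model(x, p - dp)) / 2e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        if np.sum((y - model(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept step: reduce damping
        else:
            lam *= 2.0                     # reject step: increase damping
    return p

rng = np.random.default_rng(5)
x = np.linspace(-5.0, 5.0, 200)
y = model(x, [3.0, 1.0, 1.5]) + rng.normal(scale=0.05, size=x.size)
p_hat = lm_fit(x, y, [1.0, 0.0, 1.0])
```

The iterative accept/reject loop is what makes the method robust to a poor starting guess, at the cost of the efficiency the LS section described.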

LabVIEW also provides the Constrained Nonlinear Curve Fit VI to fit a nonlinear curve with constraints. You can set the upper and lower limits of each fitting parameter based on prior knowledge about the data set to obtain a better fitting result.

The following figure shows the use of the Nonlinear Curve Fit VI on a data set. The nonlinear nature of the data set is appropriate for applying the Levenberg-Marquardt method.


Figure 6. Nonlinear Curve Model

Preprocessing

The Remove Outliers VI preprocesses the data set by removing data points that fall outside of a range. The VI eliminates the influence of outliers on the objective function. The following figure shows a data set before and after the application of the Remove Outliers VI.

Figure 7. Remove Outliers VI

In the previous figure, the graph on the left shows the original data set with the existence of outliers. The graph on the right shows the preprocessed data after removing the outliers.

You also can remove the outliers that fall within the array indices you specify.

Some data sets demand a higher degree of preprocessing. A median filter preprocessing tool is useful for both removing the outliers and smoothing out data.
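Both preprocessing steps can be sketched in a few lines (range-based outlier removal and a simple sliding median filter; these are illustrations of the techniques, not the VIs' internals):

```python
import numpy as np

def remove_outliers(y, low, high):
    # Keep only samples inside the range [low, high]
    mask = (y >= low) & (y <= high)
    return y[mask]

def median_filter(y, k=3):
    # Odd-window sliding median; edges are padded by repetition
    pad = k // 2
    yp = np.pad(y, pad, mode='edge')
    return np.array([np.median(yp[i:i + k]) for i in range(y.size)])

y = np.array([1.0, 1.1, 9.0, 1.2, 0.9, 1.0, -7.0, 1.1])
cleaned = remove_outliers(y, -2.0, 2.0)   # drops 9.0 and -7.0
smoothed = median_filter(y, k=3)          # suppresses both spikes in place
```

Range removal shrinks the data set, while the median filter keeps every index and is therefore better when sample positions matter.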

Postprocessing

LabVIEW offers VIs to evaluate the data results after performing curve fitting. These VIs can determine the accuracy of the curve fitting results and calculate the confidence and prediction intervalsin a series of measurements.

Goodness of Fit

The Goodness of Fit VI evaluates the fitting result and calculates the sum of squares error (SSE), R-square error (R2), and root mean squared error (RMSE) based on the fitting result. These three statistical parameters describe how well the fitted model matches the original data set. The following equations describe the SSE and RMSE, respectively:

SSE = Σ wi (f(xi) – yi)2

RMSE = √(SSE / DOF)

where DOF is the degrees of freedom.

The SSE and RMSE reflect the influence of random factors and show the difference between the data set and the fitted model.

The following equation describes R-square:

R-square = 1 – SSE / SST

where SST is the total sum of squares according to the following equation:

SST = Σ wi (yi – ȳ)2, with ȳ the mean of the y values

R-square is a quantitative representation of the fitting level. A high R-square means a better fit between the fitting model and the data set. Because R-square is a fractional representation of the SSE and SST, the value must be between 0 and 1.
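The three statistics follow directly from the definitions above (an unweighted sketch on synthetic data, with numpy's polyfit standing in for a fitting VI):

```python
import numpy as np

def goodness_of_fit(y, y_fit, n_params):
    resid = y - y_fit
    sse = np.sum(resid ** 2)                 # sum of squares error
    dof = y.size - n_params                  # degrees of freedom
    rmse = np.sqrt(sse / dof)                # root mean squared error
    sst = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    r_square = 1.0 - sse / sst
    return sse, rmse, r_square

rng = np.random.default_rng(6)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
a, b = np.polyfit(x, y, 1)
sse, rmse, r2 = goodness_of_fit(y, a * x + b, n_params=2)
```

With noise of standard deviation 0.5, the RMSE lands near 0.5, while R-square stays close to 1 because the linear trend dominates the noise.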


0 ≤ R-square ≤ 1

When the data samples exactly fit on the fitted curve, SSE equals 0 and R-square equals 1. When some of the data samples lie off the fitted curve, SSE is greater than 0 and R-square is less than 1. Because R-square is normalized, the closer the R-square is to 1, the higher the fitting level and the less smooth the curve.

The following figure shows the fitted curves of a data set with different R-square results.

Figure 8. Fitting Results with Different R-Square Values

You can see from the previous figure that the fitted curve with R-square equal to 0.99 fits the data set more closely but is less smooth than the fitted curve with R-square equal to 0.97.

Confidence Interval and Prediction Interval

In the real-world testing and measurement process, the fitting results differ from experiment to experiment because the data samples differ due to measurement error. For example, if the measurement errors are uncorrelated and normally distributed across all experiments, you can use the confidence interval to estimate the uncertainty of the fitting parameters. You also can use the prediction interval to estimate the uncertainty of the dependent values of the data set.

For example, you have the sample set (x0, y0), (x1, y1), …, (xn–1, yn–1) for the linear fit function y = a0x + a1. For each data sample, (xi, yi), the variance of the measurement error is specified by the weight.

You can use the function form x = (ATA)–1ATb of the LS method to fit the data according to the following equation.

where a = [a0 a1]T

y = [y0 y1 … yn–1]T

You can rewrite the covariance matrix of the parameters a0 and a1 as the following equation.

where J is the Jacobian matrix

m is the number of parameters

n is the number of data samples


In the previous equation, the number of parameters, m, equals 2. The ith diagonal element of C, Cii, is the variance of the ith parameter, ai.

The confidence interval estimates the uncertainty of the fitting parameters at a certain confidence level. For example, a 95% confidence interval means that the true value of the fitting parameter has a 95% probability of falling within the confidence interval. The confidence interval of the ith fitting parameter is:

ai ± t · σi

where t is the Student's t inverse cumulative distribution function with n – m degrees of freedom at the given probability, and σi is the standard deviation of the parameter ai, which equals √Cii.

You also can estimate the confidence interval of each data sample at a certain confidence level. For example, a 95% confidence interval of a sample means that the true value of the sample has a 95% probability of falling within the confidence interval. The confidence interval of the ith data sample is:

f(xi) ± t · √diagi(JCJT)

where diagi(A) denotes the ith diagonal element of matrix A. In the above formula, the matrix JCJT represents matrix A.

The prediction interval estimates the uncertainty of the data samples in the subsequent measurement experiment at a certain confidence level. For example, a 95% prediction interval means that the data sample has a 95% probability of falling within the prediction interval in the next measurement experiment. Because the prediction interval reflects not only the uncertainty of the true value, but also the uncertainty of the next measurement, the prediction interval is wider than the confidence interval. The prediction interval of the ith sample is:

f(xi) ± t · √(σ2 + diagi(JCJT))

where σ2 is the residual variance.

LabVIEW provides VIs to calculate the confidence interval and prediction interval of the common curve fitting models, such as the linear fit, exponential fit, Gaussian peak fit, logarithm fit, and power fit models. These VIs calculate the upper and lower bounds of the confidence interval or prediction interval according to the confidence level you set.
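Under the formulas above, a 95% parameter confidence interval for a linear fit can be sketched as follows (the data is synthetic, and 1.96 approximates the Student-t quantile at the large degrees of freedom used here):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)

# For the linear model y = a0*x + a1, the Jacobian is J = [x, 1]
J = np.column_stack([x, np.ones_like(x)])
a_hat, *_ = np.linalg.lstsq(J, y, rcond=None)

n, m = x.size, 2
resid = y - J @ a_hat
sigma2 = np.sum(resid ** 2) / (n - m)       # residual variance
C = sigma2 * np.linalg.inv(J.T @ J)         # covariance matrix of the parameters

# 95% interval: a_i +/- t * sqrt(C_ii); 1.96 approximates t at 198 DOF
half = 1.96 * np.sqrt(np.diag(C))
ci_lower, ci_upper = a_hat - half, a_hat + half
```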

The following figure shows examples of the Confidence Interval graph and the Prediction Interval graph, respectively, for the same data set.

Figure 9. Confidence Interval and Prediction Interval

From the Confidence Interval graph, you can see that the confidence interval is narrow. A small confidence interval indicates a fitted curve that is close to the real curve. From the Prediction Interval graph, you can conclude that each data sample in the next measurement experiment has a 95% chance of falling within the prediction interval.

Application Examples

Error Compensation

As measurement and data acquisition instruments increase in age, the measurement errors which affect data precision also increase. In order to ensure accurate measurement results, you can use


the curve fitting method to find the error function to compensate for data errors.

For example, examine an experiment in which a thermometer measures the temperature between –50ºC and 90ºC. Suppose T1 is the measured temperature, T2 is the ambient temperature, and e is the measurement error, where e is T1 minus T2. By measuring different temperatures within the measurable range of –50ºC to 90ºC, you obtain the following data table:

Table 2. Ambient Temperature and Measured Temperature Readings

Ambient Temperature | Measured Temperature
-43.1377 | -42.9375
-39.3466 | -39.25
-34.2368 | -34.125
-29.0969 | -29.0625
-24.1398 | -24.125
-19.2454 | -19.3125
-14.0779 | -14.1875
-9.10834 | -9.25
-4.08784 | -4.25
0.769446 | 0.5625
5.831063 | 5.625
10.84934 | 10.625
15.79473 | 15.5625
20.79082 | 20.5625
25.70361 | 25.5
30.74484 | 30.5625
35.60317 | 35.4375
40.57861 | 40.4375
45.68797 | 45.5625
50.56738 | 50.5
55.58933 | 55.5625
60.51409 | 60.5625
65.35461 | 65.4375
70.54241 | 70.6875
75.40949 | 75.625
80.41012 | 80.75
85.26303 | 85.6875

You can use the General Polynomial Fit VI to create the following block diagram to find the compensated measurement error.

Figure 10. Block Diagram of an Error Function VI Using the General Polynomial Fit VI

The following front panel displays the results of the experiment using the VI in Figure 10.

Figure 11. Using the General Polynomial Fit VI to Fit the Error Curve

The previous figure shows the original measurement error data set, the curve fitted to the data set, and the compensated measurement error. After first defining the fitted curve to the data set, the VI uses the fitted curve of the measurement error data to compensate the original measurement error.

You can see from the graph of the compensated error that using curve fitting improves the results of the measurement instrument by decreasing the measurement error to about one tenth of the original error value.
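The compensation scheme can be sketched with a few pairs from Table 2 (the subset and the third-order polynomial are illustrative choices, with numpy's polyfit standing in for the General Polynomial Fit VI):

```python
import numpy as np

# A few (ambient, measured) pairs taken from Table 2
ambient = np.array([-43.1377, -19.2454, 0.769446, 20.79082,
                    45.68797, 70.54241, 85.26303])
measured = np.array([-42.9375, -19.3125, 0.5625, 20.5625,
                     45.5625, 70.6875, 85.6875])

# Measurement error e = T1 - T2 (measured minus ambient)
error = measured - ambient

# Fit the error as a polynomial of the measured reading, then subtract
# the fitted error from each reading to compensate it
p = np.polyfit(measured, error, 3)
compensated = measured - np.polyval(p, measured)
residual_error = compensated - ambient
```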

Removing Baseline Wandering

During signal acquisition, a signal sometimes mixes with low frequency noise, which results in baseline wandering. Baseline wandering influences signal quality, thereby affecting subsequent processing. To remove baseline wandering, you can use curve fitting to obtain and extract the signal trend from the original signal.

As shown in the following figures, you can find baseline wandering in an ECG signal that measures human respiration. You can obtain the signal trend using the General Polynomial Fit VI and then detrend the signal by finding and removing the baseline wandering from the original signal. The remaining signal is the subtracted, detrended signal.
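In text form, polynomial detrending looks like this (the "ECG" here is just a sine plus a quadratic drift, an illustrative stand-in for a real recording):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
signal = np.sin(2.0 * np.pi * 5.0 * t)     # stand-in for the ECG component
baseline = 0.05 * t**2 - 0.4 * t + 1.0     # slow drift (baseline wandering)
raw = signal + baseline

# Fit a low-order polynomial to capture the trend, then subtract it
trend = np.polyval(np.polyfit(t, raw, 2), t)
detrended = raw - trend
```

A low polynomial order is the key choice: it captures the slow drift without tracking the fast signal component.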


Figure 12. Using the General Polynomial Fit VI to Remove Baseline Wandering

You can see from the previous graphs that using the General Polynomial Fit VI suppresses baseline wandering. In this example, using the curve fitting method to remove baseline wandering is faster and simpler than using other methods such as wavelet analysis.

Edge Extraction

In digital image processing, you often need to determine the shape of an object and then detect and extract the edge of the shape. This process is called edge extraction. Inferior conditions, such as poor lighting and overexposure, can result in an edge that is incomplete or blurry. If the edge of an object is a regular curve, then the curve fitting method is useful for processing the initial edge.

To extract the edge of an object, you first can use the watershed algorithm. This algorithm separates the object image from the background image. Then you can use the morphologic algorithm to fill in missing pixels and filter out the noise pixels. After obtaining the shape of the object, use the Laplacian, or the Laplace operator, to obtain the initial edge. The following figure shows the edge extraction process on an image of an elliptical object with a physical obstruction on part of the object.

Figure 13. Edge Extraction Process

As you can see from the previous figure, the extracted edge is not smooth or complete due to lighting conditions and an obstruction by another object. Because the edge shape is elliptical, you can improve the quality of the edge by using the coordinates of the initial edge to fit an ellipse function. Using an iterative process, you can update the weight of each edge pixel to minimize the influence of inaccurate pixels in the initial edge. The following figure shows the front panel of a VI that extracts the initial edge of the shape of an object and uses the Nonlinear Curve Fit VI to fit the initial edge to the actual shape of the object.


Figure 14. Using the Nonlinear Curve Fit VI to Fit an Elliptical Edge

The graph in the previous figure shows the iteration results for calculating the fitted edge. After several iterations, the VI extracts an edge that is close to the actual shape of the object.

Decomposing Mixed Pixels Using Curve Fitting

In remote sensing images, ground objects are usually detected at the level of pixel units. Due to spatial resolution limitations, one pixel often covers hundreds of square meters. A pixel is a mixed pixel if it contains ground objects of varying compositions. Mixed pixels are complex and difficult to process. One method of processing mixed pixels is to obtain the exact percentages of the objects of interest, such as water or plants.

The following image is a false color image taken by Landsat 7 ETM+ on July 14, 2000. The image displays an area of Shanghai and serves as the experimental data.

Figure 15. False Color Image

In the previous image, you can observe the five bands of the Landsat multispectral image, with band 3 displayed as blue, band 4 as green, and band 5 as red. The image area includes three types of typical ground objects: water, plant, and soil. Soil objects include artificial architecture such as buildings and bridges.

You can use the General Linear Fit VI to create a mixed pixel decomposition VI. The following figure shows the decomposition results using the General Linear Fit VI.

(a) Plant (b) Soil and Artificial Architecture (c) Water

Figure 16. Using the General Linear Fit VI to Decompose a Mixed Pixel Image

In the previous images, black-colored areas indicate 0% of a certain object of interest, and white-colored areas indicate 100% of a certain object of interest. For example, in the image representing plant objects, white-colored areas indicate the presence of plant objects. In the image representing water objects, the white-colored, wave-shaped region indicates the presence of a river. You can compare the water representation in the previous figure with Figure 15.

From the results, you can see that the General Linear Fit VI successfully decomposes the Landsat multispectral image into three ground objects.
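Linear unmixing with the General Linear Fit idea reduces to solving an observation matrix built from endmember spectra (the spectra and fractions below are made-up numbers for illustration, not Landsat values):

```python
import numpy as np

# Hypothetical endmember spectra: 5 bands x 3 classes (water, plant, soil)
endmembers = np.array([
    [0.05, 0.30, 0.25],
    [0.04, 0.50, 0.30],
    [0.03, 0.45, 0.35],
    [0.02, 0.60, 0.40],
    [0.01, 0.35, 0.45],
])

# A mixed pixel is a fraction-weighted sum of the endmember spectra
fractions_true = np.array([0.2, 0.5, 0.3])
pixel = endmembers @ fractions_true

# Least-squares unmixing recovers the per-class percentages
fractions_hat, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
```

Applying the same solve to every pixel of a multispectral image yields the per-class fraction maps shown in Figure 16.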

Exponentially Modified Gaussian Fit

The model you want to fit sometimes contains a function that LabVIEW does not include. For example, the following equation describes an exponentially modified Gaussian function.

where


y0 is the offset from the y-axis

A is the amplitude of the data set

xc is the center of the data set

w is the width of the function

t0 is the modification factor

The curve fitting VIs in LabVIEW cannot fit this function directly, because LabVIEW cannot calculate generalized integrals directly. However, the integral in the previous equation is a normal probability integral, which an error function can represent according to the following equation.

where erf(x) represents the error function in LabVIEW.

You can rewrite the original exponentially modified Gaussian function as the following equation.

LabVIEW can fit this equation using the Nonlinear Curve Fit VI. The following figure shows an exponentially modified Gaussian model for chromatography data.

Figure 17. Exponentially Modified Gaussian Model

This model uses the Nonlinear Curve Fit VI and the Error Function VI to calculate the curve fit for a data set that is best fit with the exponentially modified Gaussian function.

By using the appropriate VIs, you can create a new VI to fit a curve to a data set whose function is not available in LabVIEW.
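As a sketch, the erf-based closed form of the exponentially modified Gaussian can be evaluated with the standard-library error function (this particular expression is a common formulation and an assumption about the intended rewrite; the parameter values are arbitrary):

```python
import math
import numpy as np

def emg(x, y0, A, xc, w, t0):
    # Exponentially modified Gaussian via the error function:
    # decaying exponential tail times a rising erf transition
    z = ((x - xc) / w - w / t0) / math.sqrt(2.0)
    erf_z = np.array([math.erf(v) for v in z])
    return y0 + (A / t0) * np.exp(0.5 * (w / t0) ** 2
                                  - (x - xc) / t0) * 0.5 * (1.0 + erf_z)

x = np.linspace(0.0, 20.0, 200)
y = emg(x, y0=0.1, A=5.0, xc=6.0, w=1.0, t0=2.0)
```

Once the model is expressed in closed form like this, a nonlinear fitter can estimate y0, A, xc, w, and t0 from chromatography data.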

Summary

Curve fitting not only evaluates the relationship among variables in a data set, but also processes data sets containing noise, irregularities, errors due to inaccurate testing and measurement devices, and so on. LabVIEW provides basic and advanced curve fitting VIs that use different fitting methods, such as the LS, LAR, and Bisquare methods, to find the fitting curve. The fitting model and method you use depend on the data set you want to fit. LabVIEW also provides preprocessing and evaluation VIs to remove outliers from a data set, evaluate the accuracy of the fitting result, and measure the confidence interval and prediction interval of the fitted data.

Refer to the LabVIEW Help for more information about curve fitting and LabVIEW curve fitting VIs.

Legal

This tutorial (this "tutorial") was developed by National Instruments ("NI"). Although technical support of this tutorial may be made available by National Instruments, the content in this tutorial may not be completely tested and verified, and NI does not guarantee its quality in any way or that NI will continue to support this content with each new revision of related products and drivers. THIS TUTORIAL IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND AND SUBJECT TO CERTAIN RESTRICTIONS AS MORE SPECIFICALLY SET FORTH IN NI.COM'S TERMS OF USE (http://ni.com/legal/termsofuse/unitedstates/us/).