Advanced Mechatronics Systems - Laboratory - Mechatronics Control Systems - 1st Ed


Transcript of Advanced Mechatronics Systems - Laboratory - Mechatronics Control Systems - 1st Ed

  • Contents

    Signal processing

    Analog signal processing

    Fourier transform

    Fast Fourier transform

    Laplace transform

    Linear system

    Time-invariant system

    Dirac delta function

    Heaviside step function

    Ramp function

    Digital signal processing

    Time domain

    Z-transform

    Frequency domain

    Initial value theorem

    Final value theorem

    Sensors

    Sensor

    Accelerometer

    Capacitive sensing

    Capacitive displacement sensor

    Current sensor

    Electro-optical sensor

    Galvanometer

    Hall effect sensor

    Inductive sensor

    Infrared

    Linear encoder

    Photoelectric sensor

    Photodiode

    Piezoelectric accelerometer

    Pressure sensor

    Resistance thermometer

    Thermistor

    Torque sensor

    Ultrasonic thickness gauge

    List of sensors

    References

    Article Sources and Contributors

    Image Sources, Licenses and Contributors

    Article Licenses

    License

  • Signal processing

    Analog signal processing

    Analog signal processing is any signal processing conducted on analog signals by analog means. "Analog" indicates something that is mathematically represented as a set of continuous values. This differs from "digital", which uses a series of discrete quantities to represent the signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals they represent. Examples of analog signal processing include crossover filters in loudspeakers; "bass", "treble" and "volume" controls on stereos; and "tint" controls on TVs. Common analog processing elements include capacitors, resistors, inductors and transistors.

    Tools used in analog signal processing

    A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number of the form s = a + ib, or s = a + jb in electrical engineering terms (electrical engineers use j because current is represented by the variable i). Input signals are usually called x(t) or X(s), and output signals are usually called y(t) or Y(s).

    Convolution

    Convolution is the basic concept in signal processing that states an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has been reversed and shifted; the symbol for convolution is *.

    (f * g)(t) = ∫ₐᵇ f(τ) g(t − τ) dτ

    That is the convolution integral, used to find the convolution of a signal and a system; typically a = −∞ and b = +∞. Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed function g must be shifted along the x-axis to become identical to function f. The convolution function essentially reverses and slides function g along the axis, and calculates the integral of the product of f and the reversed, shifted g for each possible amount of sliding. When the functions match, the value of (f*g) is maximized, because when positive areas (peaks) or negative areas (troughs) are multiplied, they contribute positively to the integral.
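    For discretely sampled signals the convolution integral becomes a sum, which can be sketched in a few lines of Python (a minimal illustration; the input signal and impulse response here are arbitrary examples, not from the text):

    ```python
    import numpy as np

    # Arbitrary example input signal x[n] and system impulse response h[n]
    x = np.array([1.0, 2.0, 3.0, 0.0, -1.0])
    h = np.array([0.5, 0.25, 0.125])

    # Direct evaluation of the convolution sum: y[n] = sum_k x[k] * h[n - k]
    N = len(x) + len(h) - 1
    y_direct = np.zeros(N)
    for n in range(N):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y_direct[n] += x[k] * h[n - k]

    # NumPy's built-in convolution computes the same sum
    y_np = np.convolve(x, h)
    print(y_direct)
    ```

    Note that the output is longer than either input: convolving a length-M signal with a length-K kernel yields M + K − 1 samples.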


    Fourier transform

    The Fourier transform is a function that transforms a signal or system in the time domain into the frequency domain, but it only works for certain ones. The constraint on which systems or signals can be transformed by the Fourier transform is that the signal or system must be absolutely integrable:

    ∫₋∞^∞ |x(t)| dt < ∞

    This is the Fourier transform integral:

    X(ω) = ∫₋∞^∞ x(t) e^(−jωt) dt

    Most of the time the Fourier transform integral isn't used to determine the transform; usually a table of transform pairs is used to find the Fourier transform of a signal or system. The inverse Fourier transform is used to go from the frequency domain to the time domain:

    x(t) = (1/2π) ∫₋∞^∞ X(ω) e^(jωt) dω

    Each signal or system that can be transformed has a unique Fourier transform: only one time signal and one frequency signal go together.
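    As a quick numeric illustration of this time/frequency pairing, the discrete Fourier transform (a sampled stand-in for the integral) of a pure sinusoid concentrates at the sinusoid's frequency. The 50 Hz test tone and 1 kHz sampling rate are arbitrary choices:

    ```python
    import numpy as np

    fs = 1000                          # sampling rate in Hz (assumed)
    t = np.arange(0, 1, 1 / fs)        # one second of samples
    x = np.sin(2 * np.pi * 50 * t)     # a pure 50 Hz sinusoid

    # With a 1-second record, the FFT bin spacing is exactly 1 Hz,
    # so the index of the spectral peak equals the frequency in Hz.
    X = np.fft.rfft(x)
    peak_hz = int(np.argmax(np.abs(X)))
    print(peak_hz)  # 50
    ```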

    Laplace transform

    The Laplace transform is a generalized Fourier transform. It allows a transform of any system or signal because it is a transform into the complex plane instead of just the jω axis like the Fourier transform. The major difference is that the Laplace transform has a region of convergence for which the transform is valid. This implies that a signal in frequency may have more than one signal in time; the correct time signal for the transform is determined by the region of convergence. If the region of convergence includes the jω axis, jω can be substituted into the Laplace transform for s and it is the same as the Fourier transform. The Laplace transform is:

    X(s) = ∫₋∞^∞ x(t) e^(−st) dt

    and the inverse Laplace transform, if all the singularities of X(s) are in the left half of the complex plane, is:

    x(t) = (1/2πj) ∫ X(s) e^(st) ds   (along a vertical line in the region of convergence)
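    The defining integral can be checked numerically against a standard table pair: the transform of x(t) = e^(−2t) for t ≥ 0 is 1/(s + 2). This is a sketch using a midpoint Riemann sum; the decay rate 2 and the evaluation point s = 1 are arbitrary choices inside the region of convergence:

    ```python
    import numpy as np

    dt = 1e-4
    t = (np.arange(0, 400000) + 0.5) * dt   # midpoint grid covering [0, 40]
    x = np.exp(-2 * t)                       # x(t) = e^(-2t), t >= 0

    s = 1.0                                  # point with Re(s) > -2, inside the ROC
    X_numeric = np.sum(x * np.exp(-s * t)) * dt
    X_exact = 1.0 / (s + 2.0)
    print(X_numeric, X_exact)                # both close to 1/3
    ```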

    Bode plots

    Bode plots are plots of magnitude vs. frequency and phase vs. frequency for a system. The magnitude axis is in decibels (dB). The phase axis is in either degrees or radians. The frequency axes are on a logarithmic scale. These are useful because, for sinusoidal inputs, the output is the input multiplied by the value of the magnitude plot at that frequency and shifted by the value of the phase plot at that frequency.
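    The numbers on a Bode plot come directly from evaluating the transfer function along the jω axis. A minimal sketch for a first-order low-pass filter H(s) = 1/(1 + s/ωc), where the corner frequency of 100 Hz is an arbitrary choice:

    ```python
    import numpy as np

    wc = 2 * np.pi * 100             # corner frequency in rad/s (assumed: 100 Hz)
    w = np.logspace(0, 5, 501)       # logarithmic frequency axis, rad/s
    H = 1.0 / (1.0 + 1j * w / wc)    # evaluate H(s) at s = j*omega

    mag_db = 20 * np.log10(np.abs(H))       # magnitude curve in dB
    phase_deg = np.degrees(np.angle(H))     # phase curve in degrees

    # At the corner frequency, H = 1/(1 + j): magnitude -3 dB, phase -45 degrees
    H_c = 1.0 / (1.0 + 1j)
    print(20 * np.log10(abs(H_c)), np.degrees(np.angle(H_c)))
    ```

    Plotting mag_db and phase_deg against w on a log axis reproduces the familiar −20 dB/decade roll-off above ωc.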

    Domains

    Time domain

    This is the domain that most people are familiar with. A plot in the time domain shows the amplitude of the signal with respect to time.


    Frequency domain

    A plot in the frequency domain shows either the phase shift or the magnitude of a signal at each frequency present in it. These can be found by taking the Fourier transform of a time signal and are plotted similarly to a Bode plot.

    Signals

    While any signal can be used in analog signal processing, there are many types of signals that are used very frequently.

    Sinusoids

    Sinusoids are the building blocks of analog signal processing. All real-world signals can be represented as an infinite sum of sinusoidal functions via a Fourier series. A sinusoidal function can be represented in terms of an exponential by the application of Euler's formula.

    Impulse

    An impulse (Dirac delta function) is defined as a signal that has an infinite magnitude and an infinitesimally narrow width with an area under it of one, centered at zero. An impulse can be represented as an infinite sum of sinusoids that includes all possible frequencies. It is not, in reality, possible to generate such a signal, but it can be sufficiently approximated with a large-amplitude, narrow pulse, to produce the theoretical impulse response in a network to a high degree of accuracy. The symbol for an impulse is δ(t). If an impulse is used as an input to a system, the output is known as the impulse response. The impulse response defines the system, because all possible frequencies are represented in the input.

    Step

    A unit step function, also called the Heaviside step function, is a signal that has a magnitude of zero before zero and a magnitude of one after zero. The symbol for a unit step is u(t). If a step is used as the input to a system, the output is called the step response. The step response shows how a system responds to a sudden input, similar to turning on a switch. The period before the output stabilizes is called the transient part of a signal. The step response can be multiplied with other signals to show how the system responds when an input is suddenly turned on. The unit step function is related to the Dirac delta function by:

    u(t) = ∫₋∞ᵗ δ(τ) dτ

    Systems

    Linear time-invariant (LTI)

    Linearity means that if you have two inputs and two corresponding outputs, then a linear combination of those inputs produces the same linear combination of the outputs. An example of a linear system is a first-order low-pass or high-pass filter. Linear systems are made out of analog devices that demonstrate linear properties. These devices don't have to be entirely linear, but must have a region of operation that is linear. An operational amplifier is a non-linear device, but has a region of operation that is linear, so it can be modeled as linear within that region. Time-invariance means it doesn't matter when you start a system: the same output will result. For example, if you have a system and put an input into it today, you would get the same output if you started the system tomorrow instead. There aren't any real systems that are LTI, but many systems can be modeled as LTI for simplicity in determining what their output will be. All systems have some dependence on things like temperature, signal level or other factors that cause them to be non-linear or non-time-invariant, but most are stable enough to model as LTI.

  • Analog signal processing 4

    Linearity and time-invariance are important because they are the only types of systems that can be easily solved using conventional analog signal processing methods. Once a system becomes non-linear or non-time-invariant, it becomes a non-linear differential equations problem, and there are very few of those that can actually be solved. (Haykin & Van Veen 2003)
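    The two defining properties can be verified numerically for a simple discrete-time filter, y[n] = 0.9·y[n−1] + 0.1·x[n]. This is a minimal sketch; the filter coefficients and random test signals are arbitrary choices:

    ```python
    import numpy as np

    def lowpass(x):
        """First-order recursive low-pass filter: y[n] = 0.9 y[n-1] + 0.1 x[n]."""
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = 0.9 * (y[n - 1] if n > 0 else 0.0) + 0.1 * x[n]
        return y

    rng = np.random.default_rng(0)
    x1, x2 = rng.standard_normal(100), rng.standard_normal(100)

    # Linearity: filtering a*x1 + b*x2 equals a*filter(x1) + b*filter(x2)
    a, b = 2.0, -3.0
    lin_ok = np.allclose(lowpass(a * x1 + b * x2), a * lowpass(x1) + b * lowpass(x2))

    # Time-invariance: delaying the input delays the output by the same amount
    d = 7
    x1_delayed = np.concatenate([np.zeros(d), x1])
    ti_ok = np.allclose(lowpass(x1_delayed)[d:], lowpass(x1))
    print(lin_ok, ti_ok)
    ```

    A nonlinearity as small as clipping the output would make lin_ok fail, which is why real devices are only modeled as LTI within a linear operating region.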

    Common systems

    Some common systems used in everyday life are filters, AM/FM radio, electric guitars and musical instrument amplifiers. Filters are used in almost everything that has electronic circuitry. Radio and television are good examples of everyday uses of filters. When a channel is changed on an analog television set or radio, an analog filter is used to pick out the carrier frequency on the input signal. Once it's isolated, the television or radio information being broadcast is used to form the picture and/or sound. Another common analog system is an electric guitar and its amplifier. The guitar uses a magnet with a coil wrapped around it (an inductor) to turn the vibration of the strings into a small electric current. The current is then filtered, amplified and sent to a speaker in the amplifier. Most amplifiers are analog because they are easier and cheaper to make than digital amplifiers. There are also many analog guitar effects pedals, although a large number of pedals are now digital (they turn the input current into a digitized value, perform an operation on it, then convert it back into an analog signal).

    References

    Haykin, Simon, and Barry Van Veen. Signals and Systems. 2nd ed. Hoboken, NJ: John Wiley and Sons, Inc., 2003.

    McClellan, James H., Ronald W. Schafer, and Mark A. Yoder. Signal Processing First. Upper Saddle River, NJ: Pearson Education, Inc., 2003.

    Fourier transform

    The Fourier transform is a mathematical operation that decomposes a function into its constituent frequencies, known as its frequency spectrum. For instance, the transform of a musical chord made up of pure notes (without overtones) is a mathematical representation of the amplitudes and phases of the individual notes that make it up. The composite waveform depends on time, and therefore is called the time domain representation. The frequency spectrum is a function of frequency and is called the frequency domain representation. Each value of the function is a complex number (called complex amplitude) that encodes both a magnitude and a phase component. The term "Fourier transform" refers to both the transform operation and to the complex-valued function it produces.

    In the case of a periodic function, like the musical chord, the Fourier transform can be simplified to the calculation of a discrete set of complex amplitudes, called Fourier series coefficients. Also, when a time-domain function is sampled to facilitate storage and/or computer processing, it is still possible to recreate a version of the original Fourier transform according to the Poisson summation formula, also known as the discrete-time Fourier transform. These topics are addressed in separate articles. For an overview of those and other related operations, refer to Fourier analysis or List of Fourier-related transforms.


    Definition

    There are several common conventions for defining the Fourier transform of an integrable function f : R → C (Kaiser 1994). This article will use the definition:

    f̂(ξ) = ∫₋∞^∞ f(x) e^(−2πixξ) dx,   for every real number ξ.

    When the independent variable x represents time (with SI unit of seconds), the transform variable ξ represents frequency (in hertz). Under suitable conditions, f can be reconstructed from f̂ by the inverse transform:

    f(x) = ∫₋∞^∞ f̂(ξ) e^(2πixξ) dξ,   for every real number x.

    For other common conventions and notations, including using the angular frequency ω instead of the frequency ξ, see Other conventions and Other notations below. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and ξ momentum.

    Introduction

    The motivation for the Fourier transform comes from the study of Fourier series. In the study of Fourier series, complicated functions are written as the sum of simple waves mathematically represented by sines and cosines. Due to the properties of sine and cosine, it is possible to recover the amplitude of each wave in the sum by an integral. In many cases it is desirable to use Euler's formula, which states that e^(2πiθ) = cos 2πθ + i sin 2πθ, to write Fourier series in terms of the basic waves e^(2πiθ). This has the advantage of simplifying many of the formulas involved, and provides a formulation for Fourier series that more closely resembles the definition followed in this article. Re-writing sines and cosines as complex exponentials makes it necessary for the Fourier coefficients to be complex valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or initial angle) of the wave. These complex exponentials sometimes contain negative "frequencies". If θ is measured in seconds, then the waves e^(2πiθ) and e^(−2πiθ) both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is still closely related.

    There is a close connection between the definition of Fourier series and the Fourier transform for functions f which are zero outside of an interval. For such a function, we can calculate its Fourier series on any interval that includes the points where f is not identically zero. The Fourier transform is also defined for such a function. As we increase the length of the interval on which we calculate the Fourier series, the Fourier series coefficients begin to look like the Fourier transform, and the sum of the Fourier series of f begins to look like the inverse Fourier transform. To explain this more precisely, suppose that T is large enough so that the interval [−T/2, T/2] contains the interval on which f is not identically zero. Then the n-th series coefficient c_n is given by:

    c_n = (1/T) ∫₋T/2^T/2 f(x) e^(−2πi(n/T)x) dx.

    Comparing this to the definition of the Fourier transform, it follows that c_n = (1/T) f̂(n/T), since f(x) is zero outside [−T/2, T/2]. Thus the Fourier coefficients are just the values of the Fourier transform sampled on a grid of width 1/T. As T increases the Fourier coefficients more closely represent the Fourier transform of the function.

    Under appropriate conditions, the sum of the Fourier series of f will equal the function f. In other words, f can be written:

    f(x) = Σₙ c_n e^(2πi(n/T)x) = Σₙ f̂(ξₙ) e^(2πiξₙx) Δξ,

    where the last sum is simply the first sum rewritten using the definitions ξₙ = n/T, and Δξ = (n+1)/T − n/T = 1/T.


    This second sum is a Riemann sum, and so by letting T → ∞ it will converge to the integral for the inverse Fourier transform given in the definition section. Under suitable conditions this argument may be made precise (Stein & Shakarchi 2003). In the study of Fourier series the numbers c_n could be thought of as the "amount" of the wave in the Fourier series of f. Similarly, as seen above, the Fourier transform can be thought of as a function that measures how much of each individual frequency is present in our function f, and we can recombine these waves by using an integral (or "continuous sum") to reproduce the original function.

    The following images provide a visual illustration of how the Fourier transform measures whether a frequency is present in a particular function. The function depicted oscillates at 3 hertz (if t measures seconds) and tends quickly to 0. This function was specially chosen to have a real Fourier transform which can easily be plotted. The first image contains its graph. In order to calculate f̂(3) we must integrate e^(−2πi(3t)) f(t). The second image shows the plot of the real and imaginary parts of this function. The real part of the integrand is almost always positive: when f(t) is negative, the real part of e^(−2πi(3t)) is negative as well, and because they oscillate at the same rate, when f(t) is positive, so is the real part of e^(−2πi(3t)). The result is that when you integrate the real part of the integrand you get a relatively large number (in this case 0.5). On the other hand, when you try to measure a frequency that is not present, as in the case when we look at f̂(5), the integrand oscillates enough so that the integral is very small. The general situation may be a bit more complicated than this, but this in spirit is how the Fourier transform measures how much of an individual frequency is present in a function f(t).

    [Figures: the original function, oscillating at 3 hertz; the real and imaginary parts of the integrand of the Fourier transform at 3 hertz; the real and imaginary parts of the integrand at 5 hertz; and the Fourier transform with 3 and 5 hertz labeled.]
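    This measurement can be reproduced numerically. Assuming the depicted function is f(t) = cos(6πt)·e^(−πt²) (an assumption consistent with the description: it oscillates at 3 Hz and decays quickly, with a real transform), a Riemann sum of the Fourier integral gives roughly 0.5 at 3 Hz and nearly 0 at 5 Hz:

    ```python
    import numpy as np

    dt = 1e-3
    t = np.arange(-10, 10, dt)
    f = np.cos(6 * np.pi * t) * np.exp(-np.pi * t ** 2)  # oscillates at 3 Hz, decays fast

    def ft(xi):
        """Riemann-sum approximation of the Fourier integral at frequency xi."""
        return np.sum(f * np.exp(-2j * np.pi * xi * t)) * dt

    print(ft(3.0).real, ft(5.0).real)  # approximately 0.5, and nearly 0
    ```

    The large value at 3 Hz and the tiny value at 5 Hz mirror the integrand pictures described above.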

    Properties of the Fourier transform

    Here we assume f(x), g(x), and h(x) are integrable functions: Lebesgue-measurable on the real line and satisfying

    ∫₋∞^∞ |f(x)| dx < ∞.

    We denote the Fourier transforms of these functions by f̂(ξ), ĝ(ξ), and ĥ(ξ) respectively.

    Basic properties

    The Fourier transform has the following basic properties (Pinsky 2002).

    Linearity: For any complex numbers a and b, if h(x) = a f(x) + b g(x), then ĥ(ξ) = a f̂(ξ) + b ĝ(ξ).

    Translation: For any real number x₀, if h(x) = f(x − x₀), then ĥ(ξ) = e^(−2πix₀ξ) f̂(ξ).

    Modulation: For any real number ξ₀, if h(x) = e^(2πixξ₀) f(x), then ĥ(ξ) = f̂(ξ − ξ₀).

    Scaling: For a non-zero real number a, if h(x) = f(ax), then ĥ(ξ) = (1/|a|) f̂(ξ/a). The case a = −1 leads to the time-reversal property, which states: if h(x) = f(−x), then ĥ(ξ) = f̂(−ξ).

    Conjugation: If h(x) = conj(f(x)), then ĥ(ξ) = conj(f̂(−ξ)). In particular, if f is real, then one has the reality condition f̂(−ξ) = conj(f̂(ξ)). And if f is purely imaginary, then f̂(−ξ) = −conj(f̂(ξ)).

    Duality: If h(x) = f̂(x), then ĥ(ξ) = f(−ξ).

    Convolution: If h(x) = (f ∗ g)(x), then ĥ(ξ) = f̂(ξ) ĝ(ξ).
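    These properties have exact discrete analogues, which makes them easy to check numerically. A sketch verifying the translation (shift) property with the DFT, where circularly shifting a sequence multiplies its transform by a phase factor (the test data is random):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 256
    f = rng.standard_normal(N)

    m = 5                      # shift amount
    h = np.roll(f, m)          # h[n] = f[(n - m) mod N]

    k = np.arange(N)
    phase = np.exp(-2j * np.pi * k * m / N)   # discrete analogue of e^(-2 pi i x0 xi)

    shift_ok = np.allclose(np.fft.fft(h), phase * np.fft.fft(f))
    print(shift_ok)
    ```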

    Uniform continuity and the Riemann–Lebesgue lemma

    The rectangular function is Lebesgue integrable.

    The sinc function, which is the Fourier transform of the rectangular function, is bounded and continuous, but not Lebesgue integrable.

    The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties.

    The Fourier transform f̂ of any integrable function f is uniformly continuous, and ‖f̂‖∞ ≤ ‖f‖₁ (Katznelson 1976). By the Riemann–Lebesgue lemma (Stein & Weiss 1971), f̂(ξ) → 0 as |ξ| → ∞.

    Furthermore, f̂ is bounded and continuous, but need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, converging to a sum without being absolutely convergent.


    It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f and f̂ are integrable, the inverse equality

    f(x) = ∫₋∞^∞ f̂(ξ) e^(2πixξ) dξ

    holds almost everywhere. That is, the Fourier transform is injective on L¹(R). (But if f is continuous, then equality holds for every x.)

    Plancherel theorem and Parseval's theorem

    Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then we have Parseval's theorem (Rudin 1987, p. 187):

    ∫₋∞^∞ f(x) conj(g(x)) dx = ∫₋∞^∞ f̂(ξ) conj(ĝ(ξ)) dξ,

    where conj denotes complex conjugation.

    The Plancherel theorem, which is equivalent to Parseval's theorem, states (Rudin 1987, p. 186):

    ∫₋∞^∞ |f(x)|² dx = ∫₋∞^∞ |f̂(ξ)|² dξ.

    The Plancherel theorem makes it possible to define the Fourier transform for functions in L²(R), as described in Generalizations below. It has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. Depending on the author, either of these theorems might be referred to as the Plancherel theorem or as Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
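    The discrete Fourier transform obeys the same energy identity. A sketch using NumPy's orthonormal ("ortho") scaling, which makes the DFT unitary so the identity holds exactly (the test signal is random):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(512) + 1j * rng.standard_normal(512)

    X = np.fft.fft(x, norm="ortho")   # unitary DFT: preserves the l2 norm

    energy_time = np.sum(np.abs(x) ** 2)
    energy_freq = np.sum(np.abs(X) ** 2)
    print(energy_time, energy_freq)   # equal up to rounding
    ```

    With the default (unnormalized) fft, the identity instead reads sum|x|² = (1/N)·sum|X|².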

    Poisson summation formula

    The Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. It has a variety of useful forms that are derived from the basic one by application of the Fourier transform's scaling and time-shifting properties. One such form leads directly to a proof of the Nyquist–Shannon sampling theorem.

    Convolution theorem

    The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear).

    This means that if

    h(x) = (f ∗ g)(x) = ∫₋∞^∞ f(y) g(x − y) dy,

    where ∗ denotes the convolution operation, then

    ĥ(ξ) = f̂(ξ) ĝ(ξ).

    In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system.

    Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ).
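    The discrete counterpart is circular convolution, and the theorem can be checked directly with the FFT (random test sequences):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 16
    f = rng.standard_normal(N)
    g = rng.standard_normal(N)

    # Direct circular convolution: h[n] = sum_m f[m] * g[(n - m) mod N]
    h_direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                         for n in range(N)])

    # Convolution theorem: transform, multiply pointwise, transform back
    h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

    conv_ok = np.allclose(h_direct, h_fft)
    print(conv_ok)
    ```

    This identity is the basis of FFT-based fast convolution, which replaces an O(N²) sum with O(N log N) transforms.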


    Cross-correlation theorem

    In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x):

    h(x) = (f ⋆ g)(x) = ∫₋∞^∞ conj(f(y)) g(x + y) dy,

    then the Fourier transform of h(x) is:

    ĥ(ξ) = conj(f̂(ξ)) ĝ(ξ).

    As a special case, the autocorrelation of the function f(x) is:

    h(x) = (f ⋆ f)(x) = ∫₋∞^∞ conj(f(y)) f(x + y) dy,

    for which

    ĥ(ξ) = conj(f̂(ξ)) f̂(ξ) = |f̂(ξ)|².
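    The analogous discrete identity, where the transform of the circular cross-correlation is conj(f̂)·ĝ, can be sketched the same way (random test sequences):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N = 16
    f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

    # Direct circular cross-correlation: h[n] = sum_m conj(f[m]) * g[(m + n) mod N]
    h_direct = np.array([sum(np.conj(f[m]) * g[(m + n) % N] for m in range(N))
                         for n in range(N)])

    # Via the Fourier transform: conj(F) * G, then inverse transform
    h_fft = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))

    corr_ok = np.allclose(h_direct, h_fft)
    # Autocorrelation special case: at zero lag it equals the total energy
    auto0 = sum(np.conj(f[m]) * f[m] for m in range(N)).real
    print(corr_ok, np.isclose(auto0, np.sum(np.abs(f) ** 2)))
    ```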

    Eigenfunctions

    One important choice of an orthonormal basis for L²(R) is given by the Hermite functions

    ψₙ(x) = 2^(1/4) (n!)^(−1/2) e^(−πx²) Heₙ(2x√π),

    where Heₙ(x) are the "probabilist's" Hermite polynomials, defined by Heₙ(x) = (−1)ⁿ exp(x²/2) Dⁿ exp(−x²/2). Under this convention for the Fourier transform, we have that

    ψ̂ₙ(ξ) = (−i)ⁿ ψₙ(ξ).

    In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L²(R) (Pinsky 2002). However, this choice of eigenfunctions is not unique. There are only four different eigenvalues of the Fourier transform (±1 and ±i), and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L²(R) as a direct sum of four spaces H₀, H₁, H₂, and H₃, where the Fourier transform acts on Hₖ simply by multiplication by iᵏ. This approach to define the Fourier transform is due to N. Wiener (Duoandikoetxea 2001). Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time-frequency analysis (Boashash 2003).
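    The same four-eigenvalue structure appears in the discrete setting: the unitary DFT matrix has all of its eigenvalues among ±1 and ±i, and applying it four times is the identity. A small numerical sketch:

    ```python
    import numpy as np

    N = 8
    n = np.arange(N)
    # Unitary (orthonormal) DFT matrix
    W = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

    # Every eigenvalue lies in {1, -1, i, -i}
    eigvals = np.linalg.eigvals(W)
    targets = np.array([1, -1, 1j, -1j])
    max_dev = max(min(abs(ev - t) for t in targets) for ev in eigvals)

    # Applying the transform four times returns the original vector
    x = np.random.default_rng(5).standard_normal(N)
    fourth = np.linalg.matrix_power(W, 4) @ x
    print(max_dev, np.allclose(fourth, x))
    ```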

    Fourier transform on Euclidean space

    The Fourier transform can be defined in any arbitrary number of dimensions n. As with the one-dimensional case there are many conventions; for an integrable function f(x), this article takes the definition:

    f̂(ξ) = ∫ f(x) e^(−2πi x·ξ) dx   (integral over Rⁿ),

    where x and ξ are n-dimensional vectors, and x·ξ is the dot product of the vectors. The dot product is sometimes written as ⟨x, ξ⟩.

    All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds (Stein & Weiss 1971).


    Uncertainty principle

    Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we "squeeze" a function in x, its Fourier transform "stretches out" in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform.

    The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form.

    Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized:

    ∫₋∞^∞ |f(x)|² dx = 1.

    It follows from the Plancherel theorem that f̂(ξ) is also normalized. The spread around x = 0 may be measured by the dispersion about zero (Pinsky 2002), defined by

    D₀(f) = ∫₋∞^∞ x² |f(x)|² dx.

    In probability terms, this is the second moment of |f(x)|² about zero. The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then

    D₀(f) D₀(f̂) ≥ 1/(16π²)

    (Pinsky 2002). The equality is attained only in the case f(x) = C₁ e^(−πσx²) (hence f̂(ξ) = σ^(−1/2) C₁ e^(−πξ²/σ)), where σ > 0 is arbitrary and C₁ is such that f is L²-normalized (Pinsky 2002). In other words, where f is a (normalized) Gaussian function with variance σ², centered at zero, its Fourier transform is a Gaussian function with variance σ^(−2). In fact, this inequality implies that:

    (∫₋∞^∞ (x − x₀)² |f(x)|² dx)(∫₋∞^∞ (ξ − ξ₀)² |f̂(ξ)|² dξ) ≥ 1/(16π²)

    for any x₀, ξ₀ in R (Stein & Shakarchi 2003).

    In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, to within a factor of Planck's constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle (Stein & Shakarchi 2003).

    A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as:

    H(|f|²) + H(|f̂|²) ≥ log(e/2),

    where H(p) is the differential entropy of the probability density function p(x):

    H(p) = −∫₋∞^∞ p(x) log p(x) dx,

    where the logarithms may be in any base which is consistent. The equality is attained for a Gaussian, as in the previous case.
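    The bound can be probed numerically. For the normalized Gaussian f(x) = 2^(1/4) e^(−πx²), equality is attained: each dispersion equals 1/(4π), so the product is 1/(16π²). A sketch evaluating both dispersions by Riemann sums, with the transform computed by direct numerical integration:

    ```python
    import numpy as np

    dx = 0.005
    x = np.arange(-5, 5, dx)
    f = 2 ** 0.25 * np.exp(-np.pi * x ** 2)       # L2-normalized Gaussian

    # Dispersion of f about zero
    D0_f = np.sum(x ** 2 * np.abs(f) ** 2) * dx

    # Fourier transform by direct Riemann-sum integration on a frequency grid
    dxi = 0.01
    xi = np.arange(-3, 3, dxi)
    F = (np.exp(-2j * np.pi * np.outer(xi, x)) @ f) * dx

    # Dispersion of the transform about zero
    D0_F = np.sum(xi ** 2 * np.abs(F) ** 2) * dxi

    print(D0_f * D0_F, 1 / (16 * np.pi ** 2))     # both approximately 0.00633
    ```

    Replacing the Gaussian with a more concentrated function (e.g. a narrow rectangular pulse) pushes the product strictly above 1/(16π²).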


    Spherical harmonics

    Let the set of homogeneous harmonic polynomials of degree k on Rⁿ be denoted by Aₖ. The set Aₖ consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e^(−π|x|²) P(x) for some P(x) in Aₖ, then f̂(ξ) = i^(−k) f(ξ). Let the set Hₖ be the closure in L²(Rⁿ) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Aₖ. The space L²(Rⁿ) is then a direct sum of the spaces Hₖ, and the Fourier transform maps each space Hₖ to itself; it is possible to characterize the action of the Fourier transform on each space Hₖ (Stein & Weiss 1971). Let f(x) = f₀(|x|)P(x) (with P(x) in Aₖ); then f̂(ξ) = F₀(|ξ|)P(ξ), where

    F₀(r) = 2π i^(−k) r^(−(n+2k−2)/2) ∫₀^∞ f₀(s) J₍ₙ₊₂ₖ₋₂₎/₂(2πrs) s^((n+2k)/2) ds.

    Here J₍ₙ₊₂ₖ₋₂₎/₂ denotes the Bessel function of the first kind with order (n+2k−2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function (Grafakos 2004).

    Restriction problems

    In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous, and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the restriction of the Fourier transform of an L²(Rⁿ) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lᵖ for 1 < p < 2.


    By the Plancherel theorem, it is possible to extend the definition of the Fourier transform to general functions in L²(R) by continuity arguments. Further, F : L²(R) → L²(R) is a unitary operator (Stein & Weiss 1971, Thm. 2.3). In particular, the image of L²(R) under the Fourier transform is L²(R) itself. The Fourier transform in L²(R) is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, here meaning that for an L² function f,

    f̂(ξ) = lim_(R→∞) ∫₋R^R f(x) e^(−2πixξ) dx,

    where the limit is taken in the L² sense. Many of the properties of the Fourier transform in L¹ carry over to L², by a suitable limiting argument.

    The definition of the Fourier transform can be extended to functions in Lᵖ(R) for 1 ≤ p ≤ 2 by decomposing such functions into a fat tail part in L² plus a fat body part in L¹. In each of these spaces, the Fourier transform of a function in Lᵖ(R) is in L^q(R), where q = p/(p−1) is the conjugate exponent, by the Hausdorff–Young inequality. However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lᵖ for the range 2 < p < ∞ requires the study of distributions (Katznelson 1976). In fact, it can be shown that there are functions in Lᵖ with p > 2 so that the Fourier transform is not defined as a function (Stein & Weiss 1971).

    Tempered distributions

    The Fourier transform maps the space of Schwartz functions to itself, and gives a homeomorphism of the space to itself (Stein & Weiss 1971). Because of this it is possible to define the Fourier transform of tempered distributions. These include all the integrable functions mentioned above, as well as well-behaved functions of polynomial growth and distributions of compact support, and have the added advantage that the Fourier transform of any tempered distribution is again a tempered distribution.

    The following two facts provide some motivation for the definition of the Fourier transform of a distribution. First let f and g be integrable functions, and let f̂ and ĝ be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula (Stein & Weiss 1971):

    ∫₋∞^∞ f̂(x) g(x) dx = ∫₋∞^∞ f(x) ĝ(x) dx.

    Secondly, every integrable function f defines a distribution T_f by the relation

    T_f(φ) = ∫₋∞^∞ f(x) φ(x) dx   for all Schwartz functions φ.

    In fact, given a distribution T, we define the Fourier transform T̂ by the relation

    T̂(φ) = T(φ̂)   for all Schwartz functions φ.

    It follows that T̂_f = T_f̂.

    Distributions can be differentiated, and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.


    Generalizations

    Fourier–Stieltjes transform

    The Fourier transform of a finite Borel measure μ on Rⁿ is given by (Pinsky 2002):

    μ̂(ξ) = ∫ e^(−2πix·ξ) dμ(x)   (integral over Rⁿ).

    This transform continues to enjoy many of the properties of the Fourier transform of integrable functions. One notable difference is that the Riemann–Lebesgue lemma fails for measures (Katznelson 1976). In the case that dμ = f(x) dx, the formula above reduces to the usual definition for the Fourier transform of f. In the case that μ is the probability distribution associated to a random variable X, the Fourier–Stieltjes transform is closely related to the characteristic function, but the typical conventions in probability theory take e^(ixξ) instead of e^(−2πix·ξ) (Pinsky 2002). In the case when the distribution has a probability density function, this definition reduces to the Fourier transform applied to the probability density function, again with a different choice of constants.

    The Fourier transform may be used to give a characterization of continuous measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure (Katznelson 1976). Furthermore, the Dirac delta function is not a function but it is a finite Borel measure. Its Fourier transform is a constant function (whose specific value depends upon the form of the Fourier transform used).

    Locally compact abelian groups

    The Fourier transform may be generalized to any locally compact abelian group: an abelian group which is at the same time a locally compact Hausdorff topological space such that the group operations are continuous. If G is a locally compact abelian group, it has a translation-invariant measure μ, called the Haar measure. For a locally compact abelian group G it is possible to place a topology on the set of characters Ĝ so that Ĝ is also a locally compact abelian group. For a function f in L1(G) it is possible to define the Fourier transform f̂ by (Katznelson 1976):
        f̂(ξ) = ∫_G f(x) ξ(x)* dμ(x)   for any character ξ in Ĝ, where * denotes complex conjugation.
    Locally compact Hausdorff spaces

    The Fourier transform may be generalized to any locally compact Hausdorff space, which recovers the topology but loses the group structure. Given a locally compact Hausdorff topological space X, the space A = C0(X) of continuous complex-valued functions on X which vanish at infinity is in a natural way a commutative C*-algebra, via pointwise addition, multiplication, and complex conjugation, and with the uniform norm. Conversely, the characters of this algebra A, denoted

    are naturally a topological space; each point x of X can be identified with the character given by evaluation at x, and one has an isometric isomorphism between C0(X) and the continuous functions vanishing at infinity on the character space. In the case where X = R is the real line, this is exactly the Fourier transform.


    Non-abelian groups

    The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Unlike the Fourier transform on an abelian group, which is scalar-valued, the Fourier transform on a non-abelian group is operator-valued (Hewitt & Ross 1971, Chapter 8). The Fourier transform on compact groups is a major tool in representation theory (Knapp 2001) and non-commutative harmonic analysis.

    Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U^(σ) on the Hilbert space H_σ of finite dimension d_σ for each σ in Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on H_σ defined by

    where U^(σ)* is the complex-conjugate representation of U^(σ) acting on H_σ. As in the abelian case, if μ is absolutely continuous with respect to the left-invariant probability measure λ on G, then it is represented as
        dμ = f dλ
    for some f in L1(λ). In this case, one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ. The mapping μ ↦ μ̂ defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (E_σ) indexed by Σ of (bounded) linear operators E_σ : H_σ → H_σ for which the norm
        ‖E‖ = sup_{σ ∈ Σ} ‖E_σ‖
    is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isomorphism of C*-algebras into a subspace of C∞(Σ), in which M(G) is equipped with the product given by convolution of measures and C∞(Σ) with the product given by multiplication of operators in each index σ. The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f is in L2(G), then

    where the summation is understood as convergent in the L2 sense.

    The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.

    Alternatives

    In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (the argument of the Fourier transform at a point), and standing waves are not localized in time: a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent.

    As alternatives to the Fourier transform, in time-frequency analysis one uses time-frequency transforms or time-frequency distributions to represent signals in a form that has some time information and some frequency information; by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform or the fractional Fourier transform, or they can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the


    (continuous) Fourier transform being the continuous wavelet transform (Boashash 2003).

    Applications

    Analysis of differential equations

    Fourier transforms and the closely related Laplace transforms are widely used in solving differential equations. The Fourier transform is compatible with differentiation in the following sense: if f(x) is a differentiable function with Fourier transform f̂(ξ), then the Fourier transform of its derivative is given by 2πiξ f̂(ξ). This can be used to transform differential equations into algebraic equations. Note that this technique only applies to problems whose domain is the whole set of real numbers. By extending the Fourier transform to functions of several variables, partial differential equations with domain R^n can also be translated into algebraic equations.
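This differentiation property can be checked numerically. The sketch below (assuming NumPy is available) approximates the continuous transform of the Gaussian f(x) = exp(−πx²) by a Riemann sum and compares the transform of f′ against 2πiξ·f̂(ξ):

```python
import numpy as np

# Numerical check of the differentiation property FT{f'}(xi) = 2*pi*i*xi * FT{f}(xi)
# for the Gaussian f(x) = exp(-pi x^2), whose derivative is f'(x) = -2*pi*x*f(x).
# The continuous transform is approximated by a Riemann sum over a wide grid.

def ft(values, xs, xi):
    """Riemann-sum approximation of int f(x) exp(-2*pi*i*x*xi) dx."""
    dx = xs[1] - xs[0]
    return np.sum(values * np.exp(-2j * np.pi * xs * xi)) * dx

xs = np.linspace(-10.0, 10.0, 20001)
f = np.exp(-np.pi * xs**2)
fprime = -2.0 * np.pi * xs * f

for xi in (0.25, 1.0, 2.0):
    lhs = ft(fprime, xs, xi)               # transform of the derivative
    rhs = 2j * np.pi * xi * ft(f, xs, xi)  # 2*pi*i*xi times transform of f
    assert abs(lhs - rhs) < 1e-8
```

Because the Gaussian decays so rapidly, the Riemann sum is accurate to near machine precision here; for less well-behaved functions a proper quadrature would be needed.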

    Fourier transform spectroscopy

    The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.

    Other notations

    Other common notations for f̂(ξ) include:

    Denoting the Fourier transform by a capital letter corresponding to the letter of the function being transformed (such as f(x) and F(ξ)) is especially common in the sciences and engineering. In electronics, omega (ω) is often used instead of ξ due to its interpretation as angular frequency; sometimes it is written as F(jω), where j is the imaginary unit, to indicate its relationship with the Laplace transform, and sometimes it is written informally as F(2πf) in order to use ordinary frequency.

    The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form
        f̂(ξ) = A(ξ) e^{iφ(ξ)}
    in terms of the two real functions A(ξ) and φ(ξ), where:
        A(ξ) = |f̂(ξ)|
    is the amplitude and
        φ(ξ) = arg(f̂(ξ))
    is the phase (see arg function). Then the inverse transform can be written:
        f(x) = ∫ A(ξ) e^{i(2πξx + φ(ξ))} dξ,
    which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e^{2πixξ} whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).

    The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F, and

    F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space, and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this


    function evaluated at the value ξ of its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around.

    In mathematics and various applied sciences it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, F(rect(x)) = sinc(ξ) is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or F(f(x + x0)) = F(f(x)) e^{2πix0ξ} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0.

    Other conventions

    The Fourier transform can also be written in terms of angular frequency ω = 2πξ, whose units are radians per second. The substitution ξ = ω/(2π) into the formulas above produces this convention:
        f̂(ω) = ∫ f(x) e^{−iωx} dx.
    Under this convention, the inverse transform becomes:
        f(x) = (1/(2π)) ∫ f̂(ω) e^{iωx} dω.
    Unlike the convention followed in this article, when the Fourier transform is defined this way, it is no longer a unitary transformation on L2(R^n). There is also less symmetry between the formulas for the Fourier transform and its inverse. Another convention is to split the factor of (2π)^n evenly between the Fourier transform and its inverse, which leads to the definitions:
        f̂(ω) = (1/√(2π)) ∫ f(x) e^{−iωx} dx,    f(x) = (1/√(2π)) ∫ f̂(ω) e^{iωx} dω.
    Under this convention, the Fourier transform is again a unitary transformation on L2(R^n). It also restores the symmetry between the Fourier transform and its inverse. Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. Other than that, the choice is (again) a matter of convention.


    Summary of popular forms of the Fourier transform

    ordinary frequency ξ (hertz), unitary:      f̂(ξ) = ∫ f(x) e^{−2πixξ} dx

    angular frequency ω (rad/s), non-unitary:   f̂(ω) = ∫ f(x) e^{−iωx} dx

    angular frequency ω (rad/s), unitary:       f̂(ω) = (1/√(2π)) ∫ f(x) e^{−iωx} dx

    As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined as E[e^{itX}].

    As in the case of the "non-unitary angular frequency" convention above, there is no factor of 2π appearing in either the integral or the exponential. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponential.

    Tables of important Fourier transforms

    The following tables record some closed-form Fourier transforms. For functions f(x), g(x) and h(x), denote their Fourier transforms by f̂, ĝ, and ĥ respectively. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.

    Functional relationships

    The Fourier transforms in this table may be found in (Erdélyi 1954) or the appendix of (Kammler 2000).

    Function · Fourier transform (unitary, ordinary frequency) · Fourier transform (unitary, angular frequency) · Fourier transform (non-unitary, angular frequency) · Remarks

    Definition

    101 Linearity

    102 Shift in time domain

    103 Shift in frequency domain, dual of 102.

    104 Scaling in the time domain. If |a| is large, then f(ax) is concentrated around 0 and (1/|a|) f̂(ξ/a) spreads out and flattens.


    105 Duality. Here f̂ needs to be calculated using the same method as in the Fourier transform column. Results from swapping the "dummy" variables of x and ξ (or ω).

    106

    107 This is the dual of 106

    108 The notation f ∗ g denotes the convolution of f and g; this rule is the convolution theorem.

    109 This is the dual of 108

    110 For a purely real f(x): Hermitian symmetry, f̂(−ξ) = f̂(ξ)*, where * indicates the complex conjugate.

    111 For a purely real even function f(x): f̂(ξ), f̂(ω) and f̂(ν) are purely real even functions.

    112 For a purely real odd function f(x): f̂(ξ), f̂(ω) and f̂(ν) are purely imaginary odd functions.

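For reference, in the unitary, ordinary-frequency convention the first few functional relationships take the following standard forms (a sketch of the well-known identities):

```latex
\begin{aligned}
&\text{(Definition)} && \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx \\
&\text{(101, linearity)} && a\,f(x) + b\,g(x) \;\longleftrightarrow\; a\,\hat{f}(\xi) + b\,\hat{g}(\xi) \\
&\text{(102, time shift)} && f(x - a) \;\longleftrightarrow\; e^{-2\pi i a \xi}\,\hat{f}(\xi) \\
&\text{(103, frequency shift)} && e^{2\pi i a x} f(x) \;\longleftrightarrow\; \hat{f}(\xi - a) \\
&\text{(104, scaling)} && f(ax) \;\longleftrightarrow\; \tfrac{1}{|a|}\,\hat{f}\!\left(\tfrac{\xi}{a}\right) \\
&\text{(108, convolution)} && (f * g)(x) \;\longleftrightarrow\; \hat{f}(\xi)\,\hat{g}(\xi)
\end{aligned}
```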
    Square-integrable functions

    The Fourier transforms in this table may be found in (Campbell & Foster 1948), (Erdélyi 1954), or the appendix of (Kammler 2000).

    Function · Fourier transform (unitary, ordinary frequency) · Fourier transform (unitary, angular frequency) · Fourier transform (non-unitary, angular frequency) · Remarks

    201 The rectangular pulse and the normalized sinc function, here defined as sinc(x) = sin(πx)/(πx).

    202 Dual of rule 201. The rectangular function is an ideal low-pass filter, and the sinc function is the non-causal impulse response of such a filter.

    203 The function tri(x) is the triangular function.

    204 Dual of rule 203.

    205 The function u(x) is the Heaviside unit step function and a > 0.


    206 This shows that, for the unitary Fourier transforms, the Gaussian function exp(−αx²) is its own Fourier transform for some choice of α. For this to be integrable we must have Re(α) > 0.

    207 For a > 0. That is, the Fourier transform of a decaying exponential function is a Lorentzian function.

    208 Hyperbolic secant is its own Fourier transform.

    209 H_n is the n-th Hermite polynomial. If a = 1, then the Gauss–Hermite functions are eigenfunctions of the Fourier transform operator. For a derivation, see Hermite polynomial. The formula reduces to 206 for n = 0.

    Distributions

    The Fourier transforms in this table may be found in (Erdélyi 1954) or the appendix of (Kammler 2000).

    Function · Fourier transform (unitary, ordinary frequency) · Fourier transform (unitary, angular frequency) · Fourier transform (non-unitary, angular frequency) · Remarks

    301 The distribution δ(ξ) denotes the Dirac delta function.

    302 Dual of rule 301.

    303 This follows from 103 and 301.

    304 This follows from rules 101 and 303 using Euler's formula: cos(ax) = (e^{iax} + e^{−iax})/2.

    305 This follows from 101 and 303 using sin(ax) = (e^{iax} − e^{−iax})/(2i).

    306

    307


    308 Here, n is a natural number and δ^(n)(ξ) is the n-th distributional derivative of the Dirac delta function. This rule follows from rules 107 and 301. Combining this rule with 101, we can transform all polynomials.

    309 Here sgn(ξ) is the sign function. Note that 1/x is not a distribution. It is necessary to use the Cauchy principal value when testing against Schwartz functions. This rule is useful in studying the Hilbert transform.

    310 1/x^n is the homogeneous distribution defined by the distributional derivative

    311 This formula is valid for 0 > α > −1. For α > 0 some singular terms arise at the origin that can be found by differentiating 318. If Re α > −1, then |x|^α is a locally integrable function, and so a tempered distribution. The function α ↦ |x|^α is a holomorphic function from the right half-plane to the space of tempered distributions. It admits a unique meromorphic extension to a tempered distribution, also denoted |x|^α, for α ≠ −2, −4, ... (See homogeneous distribution.)

    312 The dual of rule 309. This time the Fourier transforms need to be considered as a Cauchy principal value.

    313 The function u(x) is the Heaviside unit step function; this follows from rules 101, 301, and 312.


    314 This function is known as the Dirac comb function. This result can be derived from 302 and 102, together with the fact that Σ_n e^{2πinx} = Σ_k δ(x − k) as distributions.

    315 The function J0(x) is the zeroth-order Bessel function of the first kind.

    316 This is a generalization of 315. The function Jn(x) is the n-th order Bessel function of the first kind. The function Tn(x) is the Chebyshev polynomial of the first kind.

    317 γ is the Euler–Mascheroni constant.

    318 This formula is valid for 1 > α > 0. Use differentiation to derive the formula for higher exponents. u is the Heaviside function.
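Several of the distributional pairs above, in the unitary, ordinary-frequency convention (a sketch of the standard results):

```latex
\begin{aligned}
&\text{(301)} && \delta(x) \;\longleftrightarrow\; 1 \\
&\text{(302)} && 1 \;\longleftrightarrow\; \delta(\xi) \\
&\text{(303)} && e^{2\pi i a x} \;\longleftrightarrow\; \delta(\xi - a) \\
&\text{(304)} && \cos(2\pi a x) \;\longleftrightarrow\; \tfrac{1}{2}\bigl(\delta(\xi - a) + \delta(\xi + a)\bigr) \\
&\text{(309)} && \operatorname{sgn}(x) \;\longleftrightarrow\; \frac{1}{i\pi\xi} \quad \text{(principal value)} \\
&\text{(313)} && u(x) \;\longleftrightarrow\; \frac{1}{2}\left(\frac{1}{i\pi\xi} + \delta(\xi)\right)
\end{aligned}
```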

    Two-dimensional functions

    Function · Fourier transform (unitary, ordinary frequency) · Fourier transform (unitary, angular frequency) · Fourier transform (non-unitary, angular frequency)

    400

    401

    402

    Remarks

    To 400: The variables x, y, ξx, ξy, ωx and ωy are real numbers. The integrals are taken over the entire plane.

    To 401: Both functions are Gaussians, which may not have unit volume.

    To 402: The function circ(r) is defined by circ(r) = 1 for 0 ≤ r ≤ 1, and is 0 otherwise. This is the Airy distribution, and is expressed using J1 (the order-1 Bessel function of the first kind). (Stein & Weiss 1971, Thm. IV.3.3)


    Formulas for general n-dimensional functions

    Function · Fourier transform (unitary, ordinary frequency) · Fourier transform (unitary, angular frequency) · Fourier transform (non-unitary, angular frequency)

    500

    501

    502

    Remarks

    To 501: The function χ[0,1] is the indicator function of the interval [0,1]. The function Γ(x) is the gamma function. The function J(n/2)+δ is a Bessel function of the first kind, with order n/2 + δ. Taking n = 2 and δ = 0 produces 402. (Stein & Weiss 1971, Thm. 4.13)

    To 502: See Riesz potential. The formula also holds for all α ≠ −n, −n − 1, ... by analytic continuation, but then the function and its Fourier transforms need to be understood as suitably regularized tempered distributions. See homogeneous distribution.

    References

    Boashash, B., ed. (2003), Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Oxford: Elsevier Science, ISBN 0080443354
    Bochner, S.; Chandrasekharan, K. (1949), Fourier Transforms, Princeton University Press
    Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), Boston: McGraw-Hill, ISBN 0071160434
    Campbell, George; Foster, Ronald (1948), Fourier Integrals for Practical Applications, New York: D. Van Nostrand Company, Inc.
    Duoandikoetxea, Javier (2001), Fourier Analysis, American Mathematical Society, ISBN 0-8218-2172-5
    Dym, H.; McKean, H. (1985), Fourier Series and Integrals, Academic Press, ISBN 978-0122264511
    Erdélyi, Arthur, ed. (1954), Tables of Integral Transforms, 1, New York: McGraw-Hill
    Fourier, J. B. Joseph (1822), Théorie Analytique de la Chaleur [1], Paris
    Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Prentice-Hall, ISBN 0-13-035399-X
    Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract Harmonic Analysis. Vol. II: Structure and Analysis for Compact Groups. Analysis on Locally Compact Abelian Groups, Die Grundlehren der mathematischen Wissenschaften, Band 152, Berlin, New York: Springer-Verlag, MR0262773
    Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3540006626
    James, J. F. (2011), A Student's Guide to Fourier Transforms (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-17683-5
    Kaiser, Gerald (1994), A Friendly Guide to Wavelets, Birkhäuser, ISBN 0-8176-3711-7
    Kammler, David (2000), A First Course in Fourier Analysis, Prentice Hall, ISBN 0-13-578782-3
    Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis, Dover, ISBN 0-486-63331-4
    Knapp, Anthony W. (2001), Representation Theory of Semisimple Groups: An Overview Based on Examples [2], Princeton University Press, ISBN 978-0-691-09089-4
    Pinsky, Mark (2002), Introduction to Fourier Analysis and Wavelets, Brooks/Cole, ISBN 0-534-37660-6
    Polyanin, A. D.; Manzhirov, A. V. (1998), Handbook of Integral Equations, Boca Raton: CRC Press, ISBN 0-8493-2876-4
    Rudin, Walter (1987), Real and Complex Analysis (Third ed.), Singapore: McGraw-Hill, ISBN 0-07-100276-6
    Stein, Elias; Shakarchi, Rami (2003), Fourier Analysis: An Introduction, Princeton University Press, ISBN 0-691-11384-X
    Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9
    Wilson, R. G. (1995), Fourier Series and Optical Transform Techniques in Contemporary Optics, New York: Wiley, ISBN 0471303577
    Yosida, K. (1968), Functional Analysis, Springer-Verlag, ISBN 3-540-58654-7

    External links

    The Discrete Fourier Transformation (DFT): Definition and numerical examples [3] - A Matlab tutorial
    Fourier Series Applet [4] (Tip: drag magnitude or phase dots up or down to change the wave form)
    Stephan Bernsee's FFTlab [5] (Java Applet)
    Stanford Video Course on the Fourier Transform [6]
    Weisstein, Eric W., "Fourier Transform [7]" from MathWorld
    The DFT à Pied: Mastering The Fourier Transform in One Day [8] at The DSP Dimension
    An Interactive Flash Tutorial for the Fourier Transform [9]

    References

    [1] http://books.google.com/?id=TDQJAAAAIAAJ&printsec=frontcover&dq=Th%C3%A9orie+analytique+de+la+chaleur&q
    [2] http://books.google.com/?id=QCcW1h835pwC
    [3] http://www.nbtwiki.net/doku.php?id=tutorial:the_discrete_fourier_transformation_dft
    [4] http://www.westga.edu/~jhasbun/osp/Fourier.htm
    [5] http://www.dspdimension.com/fftlab/
    [6] http://www.academicearth.com/courses/the-fourier-transform-and-its-applications
    [7] http://mathworld.wolfram.com/FourierTransform.html
    [8] http://www.dspdimension.com/admin/dft-a-pied/
    [9] http://www.fourier-series.com/f-transform/index.html


    Fast Fourier transform

    A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. There are many distinct FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory; this article gives an overview of the available techniques and some of their general properties, while the specific algorithms are described in subsidiary articles linked below.

    A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields (see discrete Fourier transform for properties and applications of the transform) but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing a DFT of N points in the naive way, using the definition, takes O(N²) arithmetical operations, while an FFT can compute the same result in only O(N log N) operations. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions; in practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N / log(N). This huge improvement made many DFT-based algorithms practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers.

    The most well-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^{−2πi/N} is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.

    The FFT has been described as "the most important numerical algorithm of our lifetime".[1]

    Definition and speed

    An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the only difference is that an FFT is much faster. (In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly, as discussed below.)

    Let x0, ..., xN−1 be complex numbers. The DFT is defined by the formula
        X_k = Σ_{n=0}^{N−1} x_n e^{−2πikn/N},    k = 0, ..., N−1.
    Evaluating this definition directly requires O(N²) operations: there are N outputs X_k, and each output requires a sum of N terms. An FFT is any method to compute the same results in O(N log N) operations. More precisely, all known FFT algorithms require Θ(N log N) operations (technically, O only denotes an upper bound), although there is no known proof that better complexity is impossible.

    To illustrate the savings of an FFT, consider the count of complex multiplications and additions. Evaluating the DFT's sums directly involves N² complex multiplications and N(N−1) complex additions [of which O(N) operations can be saved by eliminating trivial operations such as multiplications by 1]. The well-known radix-2 Cooley–Tukey algorithm, for N a power of 2, can compute the same result with only (N/2) log2 N complex multiplies (again, ignoring simplifications of multiplications by 1 and similar) and N log2 N complex additions. In practice, actual performance on modern computers is usually dominated by factors other than arithmetic and is a complicated subject (see, e.g., Frigo & Johnson, 2005), but the overall improvement from O(N²) to O(N log N) remains.
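The relationship between the definition and an FFT can be seen directly in code. The sketch below (assuming NumPy is available) evaluates the DFT definition as an O(N²) matrix–vector product and checks it against numpy.fft.fft, an O(N log N) implementation:

```python
import numpy as np

# Direct O(N^2) evaluation of the DFT definition
#   X_k = sum_{n=0}^{N-1} x_n * exp(-2*pi*i*k*n/N)
# as a matrix-vector product, checked against an O(N log N) FFT.

def dft_naive(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # the N x N DFT matrix
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
assert np.allclose(dft_naive(x), np.fft.fft(x))
```

Both routines return identical results up to round-off; only the operation count differs, which is why the naive form becomes impractical for large N.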


    Algorithms

    Cooley–Tukey algorithm

    By far the most common FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size N = N1N2 into many smaller DFTs of sizes N1 and N2, along with O(N) multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966). This method (and the general idea of an FFT) was popularized by a publication of J. W. Cooley and J. W. Tukey in 1965, but it was later discovered (Heideman & Burrus, 1984) that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).

    The most well-known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size N/2 at each step, and is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
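The radix-2 decomposition can be sketched in a few lines. The following (assuming NumPy and a power-of-two N) is a deliberately unoptimized, explicitly recursive version of the idea; production implementations rearrange it to avoid recursion:

```python
import numpy as np

# Recursive radix-2 Cooley-Tukey FFT for power-of-two N: a size-N DFT is
# split into DFTs of the even- and odd-indexed samples, then recombined
# with twiddle factors e^{-2*pi*i*k/N}. Educational sketch only.

def fft_ct(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_ct(x[0::2])                 # DFT of even-indexed samples
    odd = fft_ct(x[1::2])                  # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(fft_ct(x), np.fft.fft(x))
```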

    Other FFT algorithms

    There are other FFT algorithms distinct from Cooley–Tukey. For N = N1N2 with coprime N1 and N2, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite N. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z^N − 1, here into real-coefficient polynomials of the form z^M − 1 and z^{2M} + az^M + 1. Another polynomial viewpoint is exploited by the Winograd algorithm, which factorizes z^N − 1 into cyclotomic polynomials; these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O(N) irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; unfortunately, this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes.

    Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime N, expresses a DFT of prime size N as a cyclic convolution of (composite) size N − 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity nk = −(k − n)²/2 + n²/2 + k²/2.
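Bluestein's identity can be sketched concretely. The code below (assuming NumPy; an illustrative sketch, not a production routine) rewrites a DFT of arbitrary size N as a convolution via nk = n²/2 + k²/2 − (k − n)²/2, zero-pads to a power of two, and evaluates the convolution with ordinary FFTs:

```python
import numpy as np

# Sketch of Bluestein's chirp-z trick: using nk = n^2/2 + k^2/2 - (k-n)^2/2,
# a DFT of arbitrary size N becomes a convolution of "chirped" sequences,
# which is zero-padded to a power of two and evaluated with ordinary FFTs.

def bluestein_dft(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n**2 / N)        # e^{-i*pi*n^2/N}
    a = x * chirp
    M = 1 << (2 * N - 1).bit_length()             # power-of-two pad >= 2N-1
    b = np.zeros(M, dtype=complex)
    b[:N] = np.conj(chirp)                        # kernel b_m = e^{+i*pi*m^2/N}
    if N > 1:
        b[M - N + 1:] = np.conj(chirp[1:])[::-1]  # wrap negative indices
    conv = np.fft.ifft(np.fft.fft(a, M) * np.fft.fft(b))
    return chirp * conv[:N]

x = np.random.default_rng(1).standard_normal(12)
assert np.allclose(bluestein_dft(x), np.fft.fft(x))
```

Because the pad size M is a power of two, the two inner FFTs can use plain radix-2 Cooley–Tukey, which is what makes the trick useful for prime or otherwise awkward N.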


    FFT algorithms specialized for real and/or symmetric data

    In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry

        X_{N−k} = X_k*

    and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by O(N) post-processing operations.

    It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular.

    There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of (roughly) two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with O(N) pre/post-processing.
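The Hermitian symmetry of real-input DFTs, and the factor-of-two saving it permits, can be verified directly (assuming NumPy, whose rfft routine returns only the independent bins):

```python
import numpy as np

# For purely real input the DFT obeys Hermitian symmetry X_{N-k} = conj(X_k),
# so only the first N/2 + 1 outputs are independent; numpy.fft.rfft computes
# exactly those bins, saving roughly a factor of two in time and memory.

rng = np.random.default_rng(0)
x = rng.standard_normal(128)                 # real-valued signal, N = 128
X = np.fft.fft(x)

assert np.allclose(X[1:], np.conj(X[:0:-1]))  # X_{N-k} == conj(X_k)
assert np.allclose(np.fft.rfft(x), X[:65])    # rfft returns the N/2 + 1 bins
```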

    Computational issues

    Bounds on complexity and operation counts

    A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It is not even rigorously proved whether DFTs truly require Ω(N log N) (i.e., order N log N or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization.

    Following pioneering work by Winograd (1978), a tight Θ(N) lower bound is known for the number of real multiplications required by an FFT. It can be shown that only 4N − 2 log2² N − 2 log2 N − 4 irrational real multiplications are required to compute a DFT of power-of-two length N. Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). Unfortunately, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers.

    A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω(N log N) lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). Pan (1986) proved an Ω(N log N) lower bound assuming a bound on a measure of the FFT algorithm's "asynchronicity", but the generality of this assumption is unclear. For the case of power-of-two N, Papadimitriou (1979) argued that the number N log2 N of complex-number additions achieved by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2N log2 N real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than N log2 N complex-number additions (or their equivalent) for power-of-two N.

    A third problem is to minimize the total number of real multiplications and additions, sometimes called the "arithmetic complexity" (although in this context it is the exact count and not the asymptotic complexity that is being

  • Fast Fourier transform 27

    considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count forpower-of-two was long achieved by the split-radix FFT algorithm, which requires real multiplications and additions

    for . This was recently reduced to (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007). A slightly larger count

    (but still better than split radix for N256) was shown to be provably optimal for N512 under additional restrictionson the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to aSatisfiability Modulo Theories problem solvable by brute force (Haynal & Haynal, 2011).Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-datacase, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for relatedproblems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that anyimprovement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990).
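A trivial script can tabulate the two operation counts quoted above (the 2007 count is shown by its leading term only; its exact lower-order terms are omitted here):

```python
import math

def split_radix_flops(N):
    """Total real multiplications + additions of the split-radix FFT (power-of-two N > 1)."""
    return 4 * N * math.log2(N) - 6 * N + 8

for N in (64, 4096):
    classic = split_radix_flops(N)
    improved = 34 / 9 * N * math.log2(N)  # leading term of the Johnson-Frigo/Lundy-Van Buskirk count
    print(f"N={N}: split-radix {classic:.0f}, ~(34/9) N log2 N = {improved:.0f}")
```

For N = 4096 the leading-term improvement is roughly 6% fewer operations, which matches the modest but real gain reported in 2007.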

Accuracy and approximations

All of the FFT algorithms discussed above compute the DFT exactly (in exact arithmetic, i.e. neglecting floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). Only the Edelman algorithm works equally well for sparse and non-sparse data, however, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data.

Even the "exact" FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log N), compared to O(ε N^(3/2)) for the naïve DFT formula (Gentleman and Sande, 1966), where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log N)) for Cooley–Tukey and O(ε √N) for the naïve DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.

In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O(√N) for the Cooley–Tukey algorithm (Welch, 1969). Moreover, even achieving this accuracy requires careful attention to scaling in order to minimize the loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey.

To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O(N log N) time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995).
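The verification procedure mentioned at the end can be sketched with NumPy's FFT: draw random inputs and assert the linearity, impulse-response, and time-shift identities (sizes and tolerances below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
x, y = rng.standard_normal(N), rng.standard_normal(N)
a, b = 2.0, -3.0

# Linearity: FFT(a*x + b*y) = a*FFT(x) + b*FFT(y)
assert np.allclose(np.fft.fft(a * x + b * y), a * np.fft.fft(x) + b * np.fft.fft(y))

# Impulse response: the DFT of a unit impulse at n = 0 is the all-ones vector
impulse = np.zeros(N)
impulse[0] = 1.0
assert np.allclose(np.fft.fft(impulse), np.ones(N))

# Time shift: delaying x by m samples multiplies bin k by exp(-2*pi*i*m*k/N)
m = 5
phase = np.exp(-2j * np.pi * m * np.arange(N) / N)
assert np.allclose(np.fft.fft(np.roll(x, m)), phase * np.fft.fft(x))

print("all checks passed")
```

Since the DFT is the unique linear transform with these properties, passing such randomized checks gives strong evidence of correctness at O(N log N) cost.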


Multidimensional FFTs

As defined in the multidimensional DFT article, the multidimensional DFT

X_k = Σ_{n=0}^{N−1} e^{−2πi k·(n/N)} x_n

transforms an array x_n with a d-dimensional vector of indices n = (n₁, n₂, …, n_d) by a set of nested summations (over n_j = 0 … N_j − 1 for each j), where the division n/N, defined as n/N = (n₁/N₁, …, n_d/N_d), is performed element-wise. Equivalently, it is simply the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order).

This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d one-dimensional FFTs (by any of the above algorithms): first you transform along the n₁ dimension, then along the n₂ dimension, and so on (or actually, any ordering will work). This method is easily shown to have the usual O(N log N) complexity, where N = N₁ · N₂ ⋯ N_d is the total number of data points transformed. In particular, there are N/N₁ transforms of size N₁, etcetera, so the complexity of the sequence of FFTs is:

(N/N₁) O(N₁ log N₁) + ⋯ + (N/N_d) O(N_d log N_d) = O(N [log N₁ + ⋯ + log N_d]) = O(N log N).

In two dimensions, the x_n can be viewed as an N₁ × N₂ matrix, and this algorithm corresponds to first performing the FFT of all the rows and then of all the columns (or vice versa), hence the name.

In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar "slice" for each fixed n₁, and then perform the one-dimensional FFTs along the n₁ direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups (n₁, …, n_{d/2}) and (n_{d/2+1}, …, n_d) that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O(N log N) complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming.

There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O(N log N) complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = (r₁, r₂, …, r_d) of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = (1, …, 1, r, 1, …, 1), is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.
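The two-dimensional row-column algorithm can be sketched in NumPy (the function name is illustrative; np.fft.fft2 serves as the reference):

```python
import numpy as np

def fft2_row_column(x):
    """Two-dimensional DFT via the row-column algorithm: 1-D FFTs of rows, then columns."""
    rows_done = np.fft.fft(x, axis=1)     # 1-D FFT of every row
    return np.fft.fft(rows_done, axis=0)  # then a 1-D FFT of every column

x = np.random.default_rng(2).standard_normal((8, 6))
print(np.allclose(fft2_row_column(x), np.fft.fft2(x)))  # True
```

Swapping the two lines (columns first, then rows) gives the same result, illustrating that the ordering of the dimensions is arbitrary.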


Other generalizations

An O(N^(5/2) log N) generalization to spherical harmonics on the sphere S² with N² nodes was described by Mohlenkamp (1999), along with an algorithm conjectured (but not proven) to have O(N² log²N) complexity; Mohlenkamp also provides an implementation in the libftsh library [2]. A spherical-harmonic algorithm with O(N² log N) complexity is described by Rokhlin and Tygert (2006).

Various groups have also published "FFT" algorithms for non-equispaced data, as reviewed in Potts et al. (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately).

References

[1] (Strang, 1994)
[2] http://www.math.ohiou.edu/~mjm/research/libftsh.html

Brenner, N.; Rader, C. (1976). "A New Principle for Fast Fourier Transformation". IEEE Trans. Acoustics, Speech & Signal Processing 24 (3): 264–266. doi:10.1109/TASSP.1976.1162805.

Brigham, E. O. (2002). The Fast Fourier Transform. New York: Prentice-Hall.

Cooley, James W.; Tukey, John W. (1965). "An algorithm for the machine calculation of complex Fourier series". Math. Comput. 19 (90): 297–301. doi:10.1090/S0025-5718-1965-0178586-1.

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, 2001. Introduction to Algorithms, 2nd ed. MIT Press and McGraw-Hill. ISBN 0-262-03293-7. Especially chapter 30, "Polynomials and the FFT."

Duhamel, Pierre (1990). "Algorithms meeting the lower bounds on the multiplicative complexity of length-2^n DFTs and their connection with practical algorithms". IEEE Trans. Acoust. Speech. Sig. Proc. 38 (9): 1504–1511. doi:10.1109/29.60070.

P. Duhamel and M. Vetterli, 1990, "Fast Fourier transforms: a tutorial review and a state of the art" (doi:10.1016/0165-1684(90)90158-U), Signal Processing 19: 259–299.

A. Edelman, P. McCorquodale, and S. Toledo, 1999, "The Future Fast Fourier Transform?" (doi:10.1137/S1064827597316266), SIAM J. Sci. Computing 20: 1094–1114.

D. F. Elliott & K. R. Rao, 1982, Fast Transforms: Algorithms, Analyses, Applications. New York: Academic Press.

Funda Ergün, 1995, "Testing multivariate linear functions: Overcoming the generator bottleneck" (doi:10.1145/225058.225167), Proc. 27th ACM Symposium on the Theory of Computing: 407–416.

M. Frigo and S. G. Johnson, 2005, "The Design and Implementation of FFTW3" (http://fftw.org/fftw-paper-ieee.pdf), Proceedings of the IEEE 93: 216–231.

Carl Friedrich Gauss, 1866. "Nachlass: Theoria interpolationis methodo nova tractata", Werke band 3, 265–327. Göttingen: Königliche Gesellschaft der Wissenschaften.

W. M. Gentleman and G. Sande, 1966, "Fast Fourier transforms – for fun and profit", Proc. AFIPS 29: 563–578. doi:10.1145/1464291.1464352.

H. Guo and C. S. Burrus, 1996, "Fast approximate Fourier transform via wavelets transform" (doi:10.1117/12.255236), Proc. SPIE Intl. Soc. Opt. Eng. 2825: 250–259.

H. Guo, G. A. Sitton, C. S. Burrus, 1994, "The Quick Discrete Fourier Transform" (doi:10.1109/ICASSP.1994.389994), Proc. IEEE Conf. Acoust. Speech and Sig. Processing (ICASSP) 3: 445–448.

Steve Haynal and Heidi Haynal, "Generating and Searching Families of FFT Algorithms" (http://jsat.ewi.tudelft.nl/content/volume7/JSAT7_13_Haynal.pdf), Journal on Satisfiability, Boolean Modeling and Computation vol. 7, pp. 145–187 (2011).

Heideman, M. T.; Johnson, D. H.; Burrus, C. S. (1984). "Gauss and the history of the fast Fourier transform". IEEE ASSP Magazine 1 (4): 14–21. doi:10.1109/MASSP.1984.1162257.

Heideman, Michael T.; Burrus, C. Sidney (1986). "On the number of multiplications necessary to compute a length-2^n DFT". IEEE Trans. Acoust. Speech. Sig. Proc. 34 (1): 91–95. doi:10.1109/TASSP.1986.1164785.

S. G. Johnson and M. Frigo, 2007. "A modified split-radix FFT with fewer arithmetic operations" (http://www.fftw.org/newsplit.pdf), IEEE Trans. Signal Processing 55 (1): 111–119.

T. Lundy and J. Van Buskirk, 2007. "A new matrix approach to real FFTs and convolutions of length 2^k", Computing 80 (1): 23–45.

Kent, Ray D. and Read, Charles (2002). Acoustic Analysis of Speech. ISBN 0-7693-0112-6. Cites Strang, G. (1994, May–June). "Wavelets". American Scientist, 82: 250–255.

Morgenstern, Jacques (1973). "Note on a lower bound of the linear complexity of the fast Fourier transform". J. ACM 20 (2): 305–306. doi:10.1145/321752.321761.

Mohlenkamp, M. J. (1999). "A fast transform for spherical harmonics" (http://www.math.ohiou.edu/~mjm/research/MOHLEN1999P.pdf). J. Fourier Anal. Appl. 5 (2-3): 159–184. doi:10.1007/BF01261607.

Nussbaumer, H. J. (1977). "Digital filtering using polynomial transforms". Electronics Lett. 13 (13): 386–387. doi:10.1049/el:19770280.

V. Pan, 1986, "The trade-off between the additive complexity and the asyncronicity of linear and bilinear algorithms" (doi:10.1016/0020-0190(86)90035-9), Information Proc. Lett. 22: 11–14.

Christos H. Papadimitriou, 1979, "Optimality of the fast Fourier transform" (doi:10.1145/322108.322118), J. ACM 26: 95–102.

D. Potts, G. Steidl, and M. Tasche, 2001. "Fast Fourier transforms for nonequispaced data: A tutorial" (http://www.tu-chemnitz.de/~potts/paper/ndft.pdf), in: J. J. Benedetto and P. Ferreira (Eds.), Modern Sampling Theory: Mathematics and Applications (Birkhäuser).

Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Chapter 12. Fast Fourier Transform" (http://apps.nrbook.com/empanel/index.html#pg=600), Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.

Rokhlin, Vladimir; Tygert, Mark (2006). "Fast algorithms for spherical harmonic expansions". SIAM J. Sci. Computing 27 (6): 1903–1928. doi:10.1137/050623073.

James C. Schatzman, 1996, "Accuracy of the discrete Fourier transform and the fast Fourier transform" (http://portal.acm.org/citation.cfm?id=240432), SIAM J. Sci. Comput. 17: 1150–1166.

Shentov, O. V.; Mitra, S. K.; Heute, U.; Hossen, A. N. (1995). "Subband DFT. I. Definition, interpretations and extensions". Signal Processing 41 (3): 261–277. doi:10.1016/0165-1684(94)00103-7.

Sorensen, H. V.; Jones, D. L.; Heideman, M. T.; Burrus, C. S. (1987). "Real-valued fast Fourier transform algorithms". IEEE Trans. Acoust. Speech Sig. Processing 35 (6): 849–863. doi:10.1109/TASSP.1987.1165220. See also Sorensen, H.; Jones, D.; Heideman, M.; Burrus, C. (1987). "Corrections to 'Real-valued fast Fourier transform algorithms'". IEEE Transactions on Acoustics, Speech, and Signal Processing 35 (9): 1353. doi:10.1109/TASSP.1987.1165284.

Welch, Peter D. (1969). "A fixed-point fast Fourier transform error analysis". IEEE Trans. Audio Electroacoustics 17 (2): 151–157. doi:10.1109/TAU.1969.1162035.

Winograd, S. (1978). "On computing the discrete Fourier transform". Math. Computation 32 (141): 175–199. doi:10.1090/S0025-5718-1978-0468306-4. JSTOR 2006266.


External links

Fast Fourier Algorithm (http://www.cs.pitt.edu/~kirk/cs1501/animations/FFT.html)

Fast Fourier Transforms (http://cnx.org/content/col10550/), Connexions online book edited by C. Sidney Burrus, with chapters by C. Sidney Burrus, Ivan Selesnick, Markus Pueschel, Matteo Frigo, and Steven G. Johnson (2008).

Links to FFT code and information online (http://www.fftw.org/links.html)

National Taiwan University – FFT (http://www.cmlab.csie.ntu.edu.tw/cml/dsp/training/coding/transform/fft.html)

FFT programming in C++ – Cooley–Tukey algorithm (http://www.librow.com/articles/article-10)

Online documentation, links, book, and code (http://www.jjj.de/fxt/)

Using FFT to construct aggregate probability distributions (http://www.vosesoftware.com/ModelRiskHelp/index.htm#Aggregate_distributions/Aggregate_modeling_-_Fast_Fourier_Transform_FFT_method.htm)

Sri Welaratna, "30 years of FFT Analyzers" (http://www.dataphysics.com/support/library/downloads/articles/DP-30 Years of FFT.pdf), Sound and Vibration (January 1997, 30th anniversary issue). A historical review of hardware FFT devices.

FFT Basics and Case Study Using Multi-Instrument (http://www.multi-instrument.com/doc/D1002/FFT_Basics_and_Case_Study_using_Multi-Instrument_D1002.pdf)

FFT Textbook notes, PPTs, Videos (http://numericalmethods.eng.usf.edu/topics/fft.html) at Holistic Numerical Methods Institute.

ALGLIB FFT Code (http://www.alglib.net/fasttransforms/fft.php), a GPL-licensed multilanguage (VBA, C++, Pascal, etc.) numerical analysis and data processing library.

Laplace transform

In mathematics, the Laplace transform is a widely used integral transform. Denoted ℒ{f(t)}, it is a linear operator of a function f(t) with a real argument t (t ≥ 0) that transforms it to a function F(s) with a complex argument s. This transformation is essentially bijective for the majority of practical uses; the respective pairs of f(t) and F(s) are matched in tables. The Laplace transform has the useful property that many relationships and operations over the originals f(t) correspond to simpler relationships and operations over the images F(s).[1] The Laplace transform has many important applications throughout the sciences. It is named for Pierre-Simon Laplace, who introduced the transform in his work on probability theory.

The Laplace transform is related to the Fourier transform, but whereas the Fourier transform resolves a function or signal into its modes of vibration, the Laplace transform resolves a function into its moments. Like the Fourier transform, the Laplace transform is used for solving differential and integral equations. In physics and engineering, it is used for analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical devices, and mechanical systems. In this analysis, the Laplace transform is often interpreted as a transformation from the time-domain, in which inputs and outputs are functions of time, to the frequency-domain, where the same inputs and outputs are functions of complex angular frequency, in radians per unit time. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or in synthesizing a new system based on a set of specifications.


History

The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used the transform in his work on probability theory. From 1744, Leonhard Euler investigated integrals of the form

z = ∫ X(x) e^{ax} dx

as solutions of differential equations but did not pursue the matter very far.[2] Joseph Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form

∫ X(x) e^{−ax} a^x dx,

which some modern historians have interpreted within modern Laplace transform theory.[3] [4]

These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations.[5] However, in 1785, Laplace took the critical step forward when, rather than just looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form

∫ x^s φ(x) dx,

akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.[6]

Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, as the solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space.[7]

Formal definition

The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

F(s) = ℒ{f}(s) = ∫₀^∞ e^{−st} f(t) dt.

The parameter s is a complex number:

s = σ + iω, with real numbers σ and ω.

The meaning of the integral depends on types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on [0, ∞). For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood as a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below.

One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral[8]

ℒ{μ}(s) = ∫_[0,∞) e^{−st} dμ(t).

An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes

ℒ{f}(s) = ∫_{0⁻}^∞ e^{−st} f(t) dt,

where the lower limit of 0⁻ is short notation to mean

lim_{ε→0⁺} ∫_{−ε}^∞.

This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
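The definition can be sanity-checked numerically by truncating the integral at a finite upper limit (NumPy assumed; the function name and the truncation parameters T and n are illustrative):

```python
import numpy as np

def laplace_numeric(f, s, T=60.0, n=200001):
    """Approximate F(s) = integral_0^inf e^{-s t} f(t) dt by truncating at t = T."""
    t = np.linspace(0.0, T, n)
    y = np.exp(-s * t) * f(t)
    dt = t[1] - t[0]
    return dt * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoid rule

# f(t) = e^{-2t} has the closed-form transform F(s) = 1/(s + 2) for Re(s) > -2
F = laplace_numeric(lambda t: np.exp(-2.0 * t), s=1.0)
print(abs(F - 1.0 / 3.0) < 1e-6)  # True
```

Truncation at T = 60 is harmless here because the integrand decays exponentially; for slowly decaying f(t) a much larger T (or analytic treatment of the tail) would be needed.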

Probability theory

In pure and applied probability, the Laplace transform is defined by means of an expectation value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation

ℒ{f}(s) = E[e^{−sX}].

By abuse of language, this is referred to as the Laplace transform of the random variable X itself. Replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.
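Both the expectation definition and the s → −t relation to the moment generating function can be checked numerically for an exponential random variable, whose transform is known in closed form, E[e^{−sX}] = λ/(λ+s) (NumPy assumed; integration parameters are illustrative):

```python
import numpy as np

lam = 2.0  # rate of the Exponential(lam) distribution

def expect_exp(s, T=40.0, n=400001):
    """E[e^{-s X}] for X ~ Exponential(lam), by integrating e^{-s t} times the density."""
    t = np.linspace(0.0, T, n)
    y = np.exp(-s * t) * lam * np.exp(-lam * t)
    dt = t[1] - t[0]
    return dt * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoid rule

print(abs(expect_exp(1.5) - lam / (lam + 1.5)) < 1e-6)   # Laplace transform at s = 1.5: True
print(abs(expect_exp(-0.5) - lam / (lam - 0.5)) < 1e-6)  # MGF at t = 0.5 via s -> -t:   True
```

Note that the MGF evaluation only converges for t < λ, mirroring the region-of-convergence discussion below.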

Bilateral Laplace transform

When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is normally intended. The Laplace transform can be alternatively defined as the bilateral Laplace transform or two-sided Laplace transform by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function.

The bilateral Laplace transform is defined as follows:

F(s) = ∫_{−∞}^∞ e^{−st} f(t) dt.

Inverse Laplace transform

The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):

f(t) = ℒ^{−1}{F}(t) = (1/2πi) lim_{T→∞} ∫_{γ−iT}^{γ+iT} e^{st} F(s) ds,

where γ is a real number so that the contour path of integration is in the region of convergence of F(s). An alternative formula for the inverse Laplace transform is given by Post's inversion formula.
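A rough numerical rendering of the Bromwich integral, truncating the contour at ±Ω, can recover a time-domain function from its transform (all parameters below are illustrative; direct quadrature of this oscillatory integral converges slowly and is not a practical inversion method):

```python
import numpy as np

def bromwich_inverse(F, t, gamma=1.0, Omega=5000.0, n=1_000_001):
    """f(t) ~= (e^{gamma t} / 2 pi) * integral_{-Omega}^{Omega} e^{i w t} F(gamma + i w) dw."""
    w = np.linspace(-Omega, Omega, n)
    y = np.exp(1j * w * t) * F(gamma + 1j * w)
    dw = w[1] - w[0]
    integral = dw * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])  # trapezoid rule
    return (np.exp(gamma * t) / (2.0 * np.pi)) * integral.real

# F(s) = 1/(s+1)^2 is the transform of f(t) = t e^{-t}; check at t = 1
val = bromwich_inverse(lambda s: 1.0 / (s + 1.0) ** 2, t=1.0)
print(abs(val - np.exp(-1.0)) < 1e-3)  # True
```

Here γ = 1 places the contour to the right of the only singularity at s = −1, as the region-of-convergence condition requires; F was chosen to decay like 1/ω² along the contour so the truncation error at Ω = 5000 stays below the tolerance.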

Region of convergence

If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

lim_{R→∞} ∫₀^R f(t) e^{−st} dt

exists. The Laplace transform converges absolutely if the integral

∫₀^∞ |f(t) e^{−st}| dt

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense.

The set of values for which F(s) converges absolutely is either of the form Re{s} > a or else Re{s} ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).[9] Analogously, the two-sided transform converges absolutely in a strip of the form a < Re{s} < b, possibly including the lines Re{s} = a or Re{s} = b.[10] The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.

Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s₀, then it automatically converges for all s with Re{s} > Re{s₀}. Therefore the region of convergence is a half-plane of the form Re{s} > a, possibly including some points of the boundary line Re{s} = a. In the region of convergence Re{s} > Re{s₀}, the Laplace transform of f can be expressed by integrating by parts as the integral

F(s) = (s − s₀) ∫₀^∞ e^{−(s−s₀)t} β(t) dt,  where β(u) = ∫₀^u e^{−s₀t} f(t) dt.

That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace