
  • Real-time frequency estimation of analog encoder outputs

    A Bachelor Thesis

    P.J.H. Maas

    DCT 2008.081

    Engineering Thesis Committee:

    Prof. Dr. Ir. M. Steinbuch (supervisor)
    Dr. Ir. M.J.G. van de Molengraft (coach)
    Ir. R.J.E. Merry (coach)

    Eindhoven University of Technology
    Department of Mechanical Engineering
    Control Systems Technology Group

    Eindhoven, June 8, 2008


  • Summary

    In motion control applications, optical incremental encoders are used to obtain position and velocity information. Instead of using the zero transition data, which is commonly done and introduces an error of half an encoder count, the analog waveform output of the SinCos encoder can be used. These analog waveforms are not perfectly sinusoidal.

    The Heydemann model [1] compensates for amplitude, phase and offset errors. The waveforms have a sawtooth-like form. When the fundamental frequency of this waveform is known, the sinusoid can be reconstructed.

    To obtain a fundamental frequency estimate in real time, an algorithm has been developed, based on a literature study. This algorithm makes use of a least squares fit. A number of sinusoids with different frequencies is fitted on the encoder signal. The sinusoid with the smallest squared error has a frequency close to the frequency of the encoder signal. Because the dip in the error has a width of at least fs/N in Hz, with fs the sampling frequency and N the sample length, not all frequencies have to be considered, which reduces the calculation time. In between the considered frequencies a parabolic interpolation is proposed. Furthermore, an off-the-shelf continuous wavelet transform method has been considered.

    The frequency of the sawtooth-like signal is assumed to range from 50 to 1000 Hz. The sample length, N, used is 125 samples. This is 12.5 ms at a sampling frequency of 10 kHz.

    The frequency of the signal can be estimated with a systematic error of 2 % at low frequencies and a negligible systematic error at high frequencies. The stochastic error, the standard deviation of the measurements, is below 0.1 Hz for all frequencies. The accuracy can be further enhanced by choosing more sinusoids with different frequencies to compare to the signal; this, however, also increases the calculation time.

    The least squares frequency estimation algorithm has been tested in simulations and on a measurement setup. The results are compared to the results of the wavelet transform method. From these experiments it is concluded that the wavelet transform has a lower stochastic error and a better low frequency (below 50 Hz) performance. The least squares fitting algorithm, on the other hand, performs better when the frequency is changing at low frequencies (for the encoder this means that the speed is changing). This is because the frequency estimation can be done on just 0.6 period of a sample, instead of the full period needed by the wavelet transform.


  • Samenvatting

    In motion-controlled systems, incremental encoders are used to obtain position and velocity information. Normally the zero transition information is used, which introduces an error of half an encoder count. The analog encoder output of a SinCos encoder can also be used. These analog signals are not entirely sinusoidal.

    The Heydemann model [1] compensates for errors in amplitude, phase and offset. The signals have a sawtooth-like shape. When the fundamental frequency of this signal is known, the sinusoid can be reconstructed.

    To obtain an estimate of the fundamental frequency in real time, an algorithm has been developed, based on a literature study. This algorithm uses a minimized least squares error. A number of sinusoids with different frequencies is fitted on the encoder signal. The sinusoid with the smallest squared error has a frequency close to the frequency of the encoder signal. Because the dip in the error has a width of at least fs/N in Hz, with fs the sampling frequency and N the sample length, not all frequencies have to be tried, which reduces the calculation time. Between the tried frequencies a parabolic interpolation is used. Furthermore, an existing continuous wavelet transform method has been considered.

    The frequency of the sawtooth-like signals is assumed to lie between 50 and 1000 Hz. The sample length, N, used is 125 samples. This is 12.5 ms at a sampling frequency of 10 kHz.

    The frequency of the signal can be estimated with a systematic error of 2 % at low frequencies and with a negligible systematic error at high frequencies. The stochastic error, the standard deviation of the measurements, is below 0.1 Hz for all frequencies. The accuracy can be further improved by comparing more sinusoids with different frequencies to the signal; this also increases the calculation time.

    The least squares frequency estimation algorithm has been tested in simulations and on a measurement setup. The results are compared to the results of the wavelet transform. From these experiments it is concluded that the wavelet transform has a lower stochastic error and a better performance at low frequencies (below 50 Hz). The least squares algorithm, on the other hand, performs better when the frequency is changing at low frequencies (for the encoder this means that the velocity is changing). This is because the frequency estimation can be done on just 0.6 period of the sample, instead of the full period that is used by the wavelet transform.


  • Contents

    Summary

    Samenvatting

    1 Introduction
    1.1 Problem definition
    1.2 Report outline

    2 Specifications and the signal
    2.1 Measurement setup
    2.2 A sawtooth signal
    2.3 Specifications

    3 Literature study
    3.1 The methods
    3.1.1 A comparative study and wavelet methods
    3.1.2 Real-time frequency estimation in power systems
    3.1.3 Real-time frequency estimation in sound signals
    3.2 Comparison

    4 The least square algorithm
    4.1 The least square algorithm
    4.2 Adaption
    4.3 Notes on Matlab implementation

    5 Simulation
    5.1 Simulation setup
    5.2 Determination of the sample length and accuracy
    5.3 Validation
    5.4 Other signals
    5.4.1 A simple sinus
    5.4.2 A square signal
    5.4.3 A saw wave
    5.4.4 A triangle wave with offset

    6 Comparison with a wavelet transform method
    6.1 The simulation
    6.2 The results
    6.3 Conclusion

    7 Implementation
    7.1 The algorithm and the setup
    7.2 The results
    7.3 Conclusion

    8 Conclusion and recommendations
    8.1 Conclusion
    8.2 Recommendations

    Bibliography

    A Solution for a and b
    B The width of a trough
    C The frequency estimation program
    D The Simulink block scheme
    E The interpolation subsystem
    F The frequency estimation program at 1 kHz sampling frequency
    G Simulation results

  • Chapter 1

    Introduction

    In many motion control applications, optical incremental encoders are used to obtain position and velocity information. The working principle of optical incremental encoders is shown in Fig. 1.1. A light source is placed above a rotating encoder disk. The light through the encoder disk is captured on a quadrature light detector which transforms the input into two analog waveforms. The direction of motion and rotary position of the encoder can be retrieved from the waveforms.

    Figure 1.1: Optical incremental encoder principle (light source, rotating encoder disk, quadrature light detector, analog outputs)

    Commonly, the rotary position is obtained by counting zero transitions of the analog waveforms. By only using the zero transitions, an error in the position measurement of at most half an encoder count is introduced. Ideally, the two analog waveforms have a sinusoidal shape and are in quadrature, so they have a phase shift of 90 degrees with respect to each other. These analog waveforms can be used to directly derive the position and velocity information.

    Due to encoder errors, the analog output waveforms are not perfectly sinusoidal. The offset, phase and amplitude errors can be compensated for using the Heydemann model [1]. The measured analog outputs of the encoder have a sawtooth-like shape rather than a sinusoidal shape. The Heydemann model does not compensate for this.

    The sawtooth-like signals after the Heydemann correction have a fundamental frequency. When this frequency is known, the ideal sinusoidal signal can be reconstructed. The frequency changes with changing velocity. Both sinusoids have the same fundamental frequency, so the estimation only has to be done on one of these signals.

    1.1 Problem definition

    This report is about finding a method for real-time frequency estimation. The signals for which the frequency has to be estimated come from a SinCos encoder and have already been corrected for offset, phase and amplitude errors. They are sawtooth-like. The estimation method is developed primarily for this kind of signal, but it may be useful for other signals as well.

    1.2 Report outline

    First a literature study is done in Chapter 3 to investigate the available methods of real-time frequency estimation. Based on the findings of the literature study, a Least Square Fitting (LSF) algorithm is developed to estimate the fundamental frequency in real time. This is described in Chapter 4. The algorithm is optimized and validated by means of simulations and experiments in Chapters 5 and 7, respectively. It will also be tested on other kinds of signals. In Chapter 6 a comparison between the LSF algorithm and an existing Continuous Wavelet Transform (CWT) algorithm is made.

  • Chapter 2

    Specifications and the signal

    The signal output of the SinCos encoder has a fundamental frequency within a limited range, depending on the velocity of the rotation. The real-time frequency estimator has to estimate this fundamental frequency. In the following sections, the signal will be characterized and the requirements for the frequency estimator are specified, but first the measurement setup will be described.

    2.1 Measurement setup

    The encoder principle is shown in Fig. 1.1. In the measurement setup, this encoder disc, attached to a flywheel, is rotated by a motor. A picture of the setup is shown in Fig. 2.1. The motor is feedback controlled on the zero transition signal of an additional encoder with 5000 slits in a Simulink environment at a sampling frequency of 1 kHz. A simple PD controller is used. The analog encoder data of a 100 slit encoder is also measured with a sampling frequency of 1 kHz and can also be obtained in Simulink.

    Figure 2.1: The measurement setup

    2.2 A sawtooth signal

    The measured output signal of the SinCos encoder, described in the introduction, is a sawtooth-like signal. This signal has a fundamental frequency with only odd harmonics, i.e. f0, 3f0, 5f0, . . .. The amplitude decreases proportionally to the inverse square of the harmonic number. The spectrum of a sawtooth-like signal is shown in Fig. 2.2. The higher harmonics in the signal make the sharp peaks [2].

    Figure 2.2: A 250 Hz sawtooth like signal with its spectrum

    The real signal has less sharp peaks, and thus less important higher harmonics in its spectrum. The real signal is shown in Fig. 2.3. This measurement was done at a sampling frequency of 1 kHz. The signal contains little noise. The signal does show an offset of -0.05 V. This offset will be corrected by the Heydemann model.

    Figure 2.3: A sample of encoder measurement data

    2.3 Specifications

    There are several requirements which the frequency estimator must satisfy. First of all, the calculations must be done very fast. The system is assumed to be feedback controlled at a frequency of 1000 Hz. This means that there is only 1 ms for A/D conversion, the Heydemann correction, the frequency estimation, the position estimation, the actual feedback control and the D/A conversion. Let us assume that half of this time can be used for the frequency estimation. That makes the upper bound for the calculation time 0.5 ms.

    Secondly, the frequency range is set from 50 to 1000 Hz. A 100 slit encoder is used. When the encoder rotates at a minimum speed of 0.5 rotations per second, the frequency of the signal will be

    f = 0.5 · 100 = 50 Hz.  (2.1)

    On the other hand, when the encoder rotates at 10 rotations per second, the frequency of the signal will be

    f = 10 · 100 = 1000 Hz.  (2.2)

    The assumption of a rotational speed between 0.5 and 10 rotations per second and the number of slits on the encoder thus give the frequency range from 50 to 1000 Hz.

    This maximum frequency also implies a sampling frequency. To satisfy the Shannon theorem [3], the sampling frequency should be at least twice the maximum frequency occurring in the signal. In practice this frequency is chosen 10 times higher, so a sampling frequency of 10 kHz will be used. The large frequency range thus requires a high sampling frequency, which generates a lot of data points.

    Next there is the accuracy. The frequencies present in a stationary signal can be calculated exactly by the Fourier transform when the signal length goes to infinity [3]; the frequency resolution then goes to zero. The length of the signal segment used here is rather short, to exclude most of the history and thus to only estimate the current frequency. This means that the frequency will not be calculated exactly with a Fourier transform based method. A trade-off has to be made between history and accuracy. Furthermore, the minimum signal length needed to estimate the fundamental frequency is the fundamental period. At low frequencies a longer signal segment is needed than at higher frequencies.


  • Chapter 3

    Literature study

    To estimate the frequency content of a signal, the Fourier transform is commonly used. The Fourier transform is a stationary method though, which estimates frequency content without preserving the time information [9]. In real-time frequency estimation the time becomes important. The frequency content changes with time when the speed of the encoder changes. At every moment the frequency information should be obtained instantaneously. Real-time frequency estimation therefore asks for a non-stationary frequency estimation method.

    Real-time frequency estimation is a topic in several research fields. In electrical power systems the frequency is an important parameter; due to generator-load mismatches the main frequency can change [4]. Frequency analysis of myoelectric signals has been used to determine local muscle fatigue during sustained muscle contractions [5]. The frequency content of a signal is also important in speech and music recognition and manipulation and in noise reduction [6], [7], [8]. The methods used in these different application fields will be discussed in more detail in the next sections.

    3.1 The methods

    3.1.1 A comparative study and wavelet methods

    In a comparative study by Karlsson et al. [5], four time-frequency analysis methods are compared. The short time Fourier transform (STFT) applies the Fourier transform over a rectangular windowed part of the signal. Within the window the signal is assumed stationary. When the window is moved over the signal, the frequency content is determined for each time interval. The Wigner-Ville distribution (WVD) can be interpreted as an energy distribution method. The Choi-Williams distribution (CWD) is a time-frequency distribution and an expansion of the Wigner-Ville distribution.

    The wavelet analysis calculates the correlation between the signal under consideration and a wavelet function ψ(t). This analyzing wavelet function ψ(t) is referred to as the mother wavelet. Every transformation method must satisfy the Heisenberg inequality, which states that the product of time resolution Δt and frequency resolution Δf (bandwidth-time product) is lower bounded by (3.1):

    Δt · Δf ≥ 1/(4π)  (3.1)

    Whereas the STFT uses a fixed time-frequency resolution, the mother wavelet function ψ(t) is scaled by the scaling parameter s. This scaling changes the central frequency of the wavelet and the window length. For different frequencies, different time scales can be used. As a result, low frequency content can be analyzed on the required large time scales with a high frequency resolution, while high frequency content is analyzed with a smaller frequency resolution but in a far shorter time interval. The wavelet transform still satisfies the Heisenberg inequality (3.1). It is a multi-resolution transform method. A graphical interpretation of the fixed time-frequency resolution is shown in Fig. 3.1 and a graphical interpretation of the change of resolution in a wavelet transform is shown in Fig. 3.2. The wavelet transform is extensively discussed in [9].

    Figure 3.1: Constant resolution time-frequency plane

    Figure 3.2: Multi resolution time-frequency plane

    According to [5], the continuous wavelet transform (CWT) shows a better statistical performance than the other investigated analysis methods. These methods were not used in a real-time analysis though.

    3.1.2 Real-time frequency estimation in power systems

    As mentioned before, many methods for real-time frequency estimation are used in electric power systems. These methods include Prony's estimation [4], a Kalman filter [10], a wavelet approach [11] and artificial neural networks [12]. These methods suffer from several drawbacks, as they are described for their applications. Because they are used in a power system, they all assume higher harmonics with an amplitude of at most 1 %. Strong harmonic influences are present in a sawtooth signal, though. Furthermore, these methods assume a known nominal frequency, and their frequency range is far too small. The wavelet transform shows a very high accuracy, but is applied to a very narrow frequency range.

    Artificial neural networks do not suffer from the above stated drawbacks, but are not the preferred method because of their black-box nature. These systems are very useful when a lot of information is missing. The artificial neural network described in [13] uses Hopfield-type feedback neural networks for real-time harmonic evaluation. The parallel processing provides high computational speeds. The results described in this article look promising, but the system is not tested under high harmonic pollution; it is only tested in a power system.

    3.1.3 Real-time frequency estimation in sound signals

    In a feedback active noise control system, real-time frequency estimation is used because the frequency information is needed for the reference generator. This frequency estimator is based on the adaptive notch filter (ANF) with constrained poles and zeros. It is described in [8]. The estimation of the notch coefficients is done by a linearized minimal parameter estimation algorithm. The accuracy of this method is not very good in noisy environments.

    Figure 3.3: Principle of fundamentalness

    A fundamental frequency extraction method is described in [6]. It is based on the concept of fundamentalness and is a wavelet based method. The fundamentalness is defined to have maximum value when the frequency and amplitude modulation magnitudes are minimum, and it has a monotonic relation with the modulation magnitudes. The concept of fundamentalness is illustrated in Fig. 3.3. When no harmonic component is within the response area of the analyzing wavelet (a), fundamentalness provides the background noise level. When the fundamental component is inside the response area, but not at the characteristic frequency of the analyzing wavelet (b), fundamentalness is not very high, because of the low signal-to-noise ratio. When the frequency of the fundamental component agrees with the characteristic frequency of the analyzing wavelet (c), the highest signal-to-noise ratio causes the fundamentalness to be maximized. When the frequency of a higher harmonic component agrees with the characteristic frequency of the wavelet, the fundamentalness is not very high, because two or more harmonic components are located within the response area due to the filter shape design.

    As a result, the fundamental frequency is obtained every 1 ms in a search range of 40 to 800 Hz in [6]. For signal-to-noise ratios of 20 dB and higher, the fundamental frequency is obtained successfully.

    Table 3.1 rates each method on the criteria step response, tracing, accuracy, range, harmonics, noise and calculation time (in that order); methods with fewer entries were not rated on all criteria:

    wavelet [11]: +/- , +/- , + , - , + , + , +
    adaptive notch [8]: - , + , +/- , + , - , +/- , +
    Prony est. [4]: - , + , +/- , + , + , +/- , +
    neural networks [12]: + , + , - , - , +
    neural networks [13]: + , + , + , +
    least squares [7]: + , + , + , + , +
    fundamentalness [6]: + , + , + , + , + , +/-
    Kalman filter [10]: +/- , +/- , - , + , +

    Table 3.1: Comparison of methods

    A fundamental frequency estimation (FFE) algorithm based on a least square fit is proposed in [7]. An error is calculated as a function of the frequency. This error shows a dip when the sinusoid fitted on a signal segment has the same frequency as a frequency component of the signal segment.

    The least square fit has two crucial properties. One property is that the fundamental frequency shows the lowest squared error. The other defines the minimum width of the dips in the squared error, referred to as troughs in [7]; hence the error does not have to be calculated for every frequency. The error is only calculated for analyzing frequencies sufficiently far apart from each other, enhancing computational efficiency.

    The method successfully estimates frequencies in a range of 98 to 784 Hz, with a sample length of less than 5 ms. The computation time is about 8 ms, measured on a 30 MIPS (million instructions per second) processor. A Pentium 4 2.2 GHz processor does about 4000 MIPS, so with current technology this should not be a problem.

    3.2 Comparison

    Based on the results presented in the articles studied, a comparison table is made. In this table several criteria are judged. These criteria include:

    step response: fast estimation of a suddenly changing frequency

    tracing: fast estimation of a gradually changing frequency

    accuracy: accuracy of the estimated frequency with respect to the reference frequency

    range: frequency range of the estimation method

    higher harmonics: possible negative influence on the performance by disturbance from higher harmonics

    white noise: possible negative influence on the performance by disturbance from noise

    calculation time: for real-time frequency estimation the method should be fast

    It should be noted that comparing the different results from the different articles is difficult because they do not use the same test methods. Some methods are not tested on all criteria throughout the articles. This makes the comparison a very rough one. The results of the comparison are shown in Table 3.1.

    The conclusion, based on the comparison, is that the method based on fundamentalness [6] and the FFE algorithm based on a least square fit [7] are the most useful. The FFE algorithm is a quite simple algorithm which is based on linear algebra, while the fundamentalness algorithm uses integrals and second order differentials. The fundamentalness method needs advanced algorithms for integration, for example an adaptive Simpson rule. These algorithms need extra calculation time. This needs to be done for all channels in the frequency range in which the wavelet works.

    Because of its simplicity and the promising results in the paper, the fundamental frequency estimation (FFE) algorithm based on a least square fit is used for the real-time frequency estimation of the encoder signals.


  • Chapter 4

    The least square algorithm

    In this chapter, the chosen Least Square Fit (LSF) algorithm, as described in [7], will be presented. This algorithm is used in a somewhat changed form, which will also be discussed. Then there will be some notes on the applicability of the algorithm. The algorithm is first written in a Matlab script and after that built in the Simulink environment. The script and the Simulink block scheme are included in appendices C, D and E.

    4.1 The least square algorithm

    The signal that is to be estimated from the discrete sawtooth signal segment coming from the encoder can be described by (4.1):

    x̂(n) = a sin(ωn) + b cos(ωn)  with n = 1, 2, . . . , N−1, N  (4.1)

    In this signal ω = 2πf/fs is the relative fundamental frequency. The fundamental frequency in Hz is f and fs is the sampling frequency. This function has to be fitted on the real signal x(n). The parameters a and b determine the amplitude and phase of x̂(n), as they do in the Fourier transform. The squared error is given by

    e = Σ_{n=1}^{N} (x(n) − x̂(n))² .  (4.2)

    Eq. (4.2) is a function of a, b and ω and is, for a given ω, minimal when

    ∂e/∂a = 2a Σ_{n=1}^{N} sin(ωn) sin(ωn) + 2b Σ_{n=1}^{N} cos(ωn) sin(ωn) − 2 Σ_{n=1}^{N} x(n) sin(ωn) = 0  (4.3)

    a Σ_{n=1}^{N} sin(ωn) sin(ωn) + b Σ_{n=1}^{N} cos(ωn) sin(ωn) − Σ_{n=1}^{N} x(n) sin(ωn) = 0  (4.4)

    aP + bQ + W = 0  (4.5)

    and

    ∂e/∂b = 2a Σ_{n=1}^{N} sin(ωn) cos(ωn) + 2b Σ_{n=1}^{N} cos(ωn) cos(ωn) − 2 Σ_{n=1}^{N} x(n) cos(ωn) = 0  (4.6)

    a Σ_{n=1}^{N} sin(ωn) cos(ωn) + b Σ_{n=1}^{N} cos(ωn) cos(ωn) − Σ_{n=1}^{N} x(n) cos(ωn) = 0  (4.7)

    aQ + bR + X = 0  (4.8)

    with

    P = Σ_{n=1}^{N} sin(ωn) sin(ωn)  (4.9)

    Q = Σ_{n=1}^{N} cos(ωn) sin(ωn)  (4.10)

    R = Σ_{n=1}^{N} cos(ωn) cos(ωn)  (4.11)

    W = −Σ_{n=1}^{N} x(n) sin(ωn)  (4.12)

    X = −Σ_{n=1}^{N} x(n) cos(ωn) .  (4.13)

    The solution to this pair of equations, (4.5) and (4.8), is given by (see also appendix A):

    a = (QX − RW) / (PR − Q²)  (4.14)

    and

    b = (QW − PX) / (PR − Q²) .  (4.15)

    With a and b known, the estimated signal, x̂(n), is known and the squared error, e(ω), can be calculated for each radial frequency ω. The result of such a calculation is shown in Fig. 4.1. This is a calculation of the error of a 250 Hz signal.
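    As an illustration of the computation described above, the following minimal Matlab sketch evaluates the squared fitting error for a set of candidate frequencies using (4.9)-(4.15) and (4.2). The variable names, the example input and the test grid are illustrative only and do not correspond to the program in appendix C.

        fs    = 10000;              % sampling frequency [Hz]
        N     = 125;                % sample length
        n     = (1:N)';             % time index
        x     = sin(2*pi*250*n/fs); % example input: a 250 Hz sinusoid segment
        ftest = 40:20:1000;         % candidate frequencies [Hz]
        e     = zeros(size(ftest));
        for i = 1:numel(ftest)
            w    = 2*pi*ftest(i)/fs;           % relative radial frequency
            sn   = sin(w*n);  cn = cos(w*n);
            P    = sum(sn.*sn);  Q = sum(cn.*sn);  R = sum(cn.*cn);
            W    = -sum(x.*sn);  X = -sum(x.*cn);
            a    = (Q*X - R*W)/(P*R - Q^2);    % (4.14)
            b    = (Q*W - P*X)/(P*R - Q^2);    % (4.15)
            xhat = a*sn + b*cn;                % (4.1)
            e(i) = sum((x - xhat).^2);         % (4.2)
        end
        [~, imin] = min(e);                    % trough with the lowest squared error
        f0_coarse = ftest(imin);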

    Figure 4.1: fitting error of a 250 Hz triangular wave

    Choi states in [7] that this error function, e(ω), has two important properties:

    1. Each significant trough in the function e(ω) corresponds to a sinusoidal component of the analyzed signal segment, x(n). The value of ω at the minimum point of a trough is equal to the frequency of the corresponding component.

    2. The width of each significant trough in the function e(ω) is at least 2π/N in radial frequency. This width is thus independent of the frequencies of the sinusoidal components of the input signal, provided that these are located sufficiently far apart from each other, that is, their frequencies differ by more than 2π/N in radial frequency. This can be seen in Fig. 4.1.


    In [7] the method is used to estimate the fundamental frequency of musical signals. The most important frequency component, i.e. the one with the highest amplitude, is the fundamental frequency, and the higher harmonics are integer multiples of the fundamental frequency. This is also the case for sawtooth-like signals, but then only the odd multiples are contained in the spectrum. This makes the method as it is described in [7] useful for sawtooth-like signals.

    In music signals the fundamental frequency, ω0, must be higher than 2π/N to avoid that the troughs interfere. For a sawtooth signal, the next trough will be at a frequency twice as far away, because the even multiples fall out of the spectrum. The fundamental frequency of the sawtooth signal, ω0, should therefore only be higher than π/N. Otherwise the next trough of e(ω), corresponding to the next higher harmonic, falls within the fundamental trough. The minimum fundamental frequency that can be estimated without interference of the troughs, in Hz, is given by (4.16):

    f0 ≥ fs/(2N)  (4.16)

    The fundamental frequency component will only show the lowest error if it also has the highest amplitude. An example for which this is not true is shown in Fig. 4.2. In this figure a signal composed of two sinusoids with frequencies of 250 Hz and 375 Hz, respectively, is analyzed. The fundamental frequency of this signal is 125 Hz, but the lowest error is shown for both the 250 Hz and the 375 Hz component. Neither of these is the fundamental frequency.

    Figure 4.2: The frequency estimation for which the fundamental frequency does not have the lowest error

    The second property makes it necessary to compute e(ω) only at values of ω evenly spaced 2π/(3N) apart. This makes sure that there will be at least three frequencies falling into a trough, of which the middle one is the lowest. It should be noted that when there are more frequencies in the tested range, the accuracy becomes better. Furthermore, the minimum width of a trough is proven in appendix B.

    4.2 Adaption

    The algorithm as described above is adapted to work with frequencies in Hz instead of relative radial frequencies. In [7] the number of intervals is calculated by m = ωmax/(2π/(3N)), choosing ωi = 2πi/(3N) with i = 1, 2, . . . , m. Substituting ωmax = 2πfmax/fs gives

    m = 3N fmax/fs .  (4.17)

    Note that the calculations are still done with relative frequencies; only the input is changed for usability. The 3 in (4.17) is made a parameter, nppt (minimal Number of Points Per Trough), so the accuracy can be enhanced when needed. Furthermore, a small factor 1.2 is introduced to have interpolation points beyond exactly 1 kHz. This leads to (4.18):

    m = 1.2 nppt N fmax/fs .  (4.18)

    The error function is now a function of the frequency, f, in Hz instead of the relative radial frequency, ω.
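    As a worked example, with the values arrived at later in this report (nppt = 5, N = 125, fmax = 1000 Hz and fs = 10 kHz), (4.18) gives m = 1.2 · 5 · 125 · 1000/10000 = 75 test frequencies.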

    When the sample used for the least squares fitting procedure does not contain an integer number of periods, the estimated frequency will not be exactly the fundamental frequency. This happens because only a limited number of frequencies is tested. The frequency with the lowest error in the trough will be near the fundamental frequency though. To enhance the frequency estimation, a parabolic interpolation is used between the lowest point and its two neighboring points, which also lie in the trough. The parabolic interpolation is done by linear regression [14]. This leads to the parabolic coefficients y1, y2 and y3 of (4.19):

    e = y1 f² + y2 f + y3 .  (4.19)

    The minimum of this interpolated trough is given by

    de/df = 2 y1 f + y2 = 0  (4.20)

    leading to

    f0 = −y2/(2 y1) .  (4.21)
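    A minimal Matlab sketch of this three-point parabolic interpolation, assuming the error vector e and the test frequencies ftest from the earlier sketch, could look as follows; the handling of a minimum at the edge of the frequency grid is omitted.

        [~, imin] = min(e);                 % index of the lowest error in the trough
        fi = ftest(imin-1:imin+1);          % the lowest point and its two neighbours
        ei = e(imin-1:imin+1);
        y  = polyfit(fi, ei, 2);            % least squares parabola, cf. (4.19)
        f0 = -y(2)/(2*y(1));                % vertex of the parabola, (4.21)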

    4.3 Notes on Matlab implementation

    The algorithm now has two parameters left. The sample length, N, needs to be optimized. For high frequency signals, N should be low, such that there is not too much history in the estimation. When the signal contains a low fundamental frequency, N needs to be high to cover enough of the signal for a useful fit. If N is too high, the matrices in the calculation become so large that real-time calculation is not possible anymore. The optimization of N is done by simulations in the next chapter. Furthermore, the accuracy can still be enhanced by adjusting the setting nppt.

    To enhance the speed of calculation, most variables needed in the algorithm are calculated off-line. There are matrices s and c containing the sinusoids as a function of ω and n. All values in the matrices which are a function of ω are saved horizontally, while all values related to time, n, are saved vertically in the matrices:

    s, c ∈ R^(N×m) .  (4.22)

    The vectors P, Q and R are calculated and saved horizontally, because they are a function of ω only. Because the denominator in (4.14) and (4.15) is the same, a vector D = PR − Q² is introduced.

    When the algorithm was first written, most calculations were programmed in loops. The Matlab language is made to deal with matrices, and calculations in loops are very slow. All loops are therefore replaced by matrix calculations. Some vectors have to be expanded; this is done by multiplying a column of ones of the right dimensions with the row vector. Hence there are no loops left in the algorithm, reducing the calculation time and making it work under real-time conditions.

    The resulting Matlab program is presented in appendix C.
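    As an illustration of the vectorization described above (a sketch only, not a reproduction of the program in appendix C), the frequency loop of the earlier sketch can be replaced by precomputed matrices and matrix products:

        % off-line precomputation, done once
        fs = 10000;  N = 125;  n = (1:N)';
        ftest = 40:20:1000;                   % illustrative test frequencies [Hz]
        w  = 2*pi*ftest/fs;                   % 1 x m relative radial frequencies
        s  = sin(n*w);  c = cos(n*w);         % N x m: time vertical, frequency horizontal
        P  = sum(s.*s, 1);  Q = sum(c.*s, 1);  R = sum(c.*c, 1);
        D  = P.*R - Q.^2;                     % common denominator of (4.14) and (4.15)

        % on-line, for every new signal segment x (N x 1)
        x  = sin(2*pi*250*n/fs);              % example segment
        W  = -(x'*s);  X = -(x'*c);           % 1 x m
        a  = (Q.*X - R.*W)./D;                % (4.14)
        b  = (Q.*W - P.*X)./D;                % (4.15)
        xhat = s.*(ones(N,1)*a) + c.*(ones(N,1)*b);         % expand rows with a column of ones
        e    = sum((x*ones(1,numel(ftest)) - xhat).^2, 1);  % squared error per test frequency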

  • Chapter 5

    Simulation

    There are two parameters left to determine. These are the sample length, N, and the number of points per trough, nppt. These are determined by means of simulation. Also the interpolation method can be further refined.

    A trade-off between accuracy and history has to be made by choosing the sample length N, as described in Section 4.3. In the following sections, first the simulation setup is explained. Then the sample length is determined and the number of points per trough is optimized, to enhance the accuracy. Furthermore, experiments are conducted with several interpolation methods.

    The algorithm is tested with the settings found, and the results are presented in Section 5.3. In Section 5.4 the algorithm is tested on other kinds of signals.

    5.1 Simulation setup

    The simulation experiments are done in Matlab. Six performance indicators are calculated in loops for several sample lengths and several signal frequencies. The signal used is an ideal sawtooth signal, as shown in Fig. 2.2. These performance indicators are stored in a matrix.

    The mean estimated frequency, fm, and the standard deviation, σ, are calculated. The measurements are assumed to be distributed according to the Student-t distribution. 21 measurements are done, so the Student-t factor becomes 2.1. This means that a single measurement will be within the range fm ± 2.1σ with 95 % certainty [15].

    The performance indicators are:

    Mean frequency: The average frequency of the 21 measurements of estimated frequencies.

    Standard deviation: The standard deviation of the 21 measurements of estimated frequencies. This is a measure for the stochastic error.

    Absolute error: The difference between the mean estimated frequency and the frequency of the test signal. This is a measure for the systematic error.

    Relative error: The absolute error divided by the frequency of the test signal. This is also a measure for the systematic error.

    Calculation time: The mean calculation time. Although this is not a representative value, it gives an indication of the calculation time. This value is not representative because the measurements are done in a non-ideal computational environment.

    Number of periods: The number of periods in the calculation is a measure for the signal history included in the measurement.


    When the sample length becomes too short for a low frequency signal, the frequency of this signal cannot be estimated. The test frequency with the lowest error is then the first point of e(f), and there is no point before this minimum point to conduct a parabolic interpolation with. When this occurs, the element for the current sample length and signal frequency of a check matrix, which is initially 1 for all tested sample lengths and frequencies, is set to infinity. After all measurements, the performance indicator matrices are divided by this matrix, so the elements that could not be tested become zero. Therefore, at low frequencies with low sample lengths, there are zeros in the performance indicator matrices.
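    A sketch of this bookkeeping, with hypothetical variable names (the actual simulation script is not reproduced here): the check matrix starts at one, failed combinations are set to infinity, and the division afterwards zeroes out the entries that could not be tested.

        Nlist = [10 20 30 40 50 75 100 150 200 400];        % sample lengths tested
        Flist = [50 75 100 150 200 300 400 500 750 1000];   % signal frequencies tested
        check  = ones(numel(Nlist), numel(Flist));
        relerr = zeros(numel(Nlist), numel(Flist));          % placeholder performance indicator
        % ... inside the simulation loops, when no interpolation is possible for (i,j):
        %     check(i,j) = Inf;
        relerr = relerr ./ check;                            % not-testable elements become zero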

    The ideal sawtooth input signals do have some noise added. Because the real encoder signals do not contain much noise, only -40 dB of noise is added. The amplitude of the sawtooth signal is 1.

    5.2 Determination of the sample length and accuracy

    The first set of simulations is done with a wide range of frequencies and sample lengths. The number of points per trough is set to three, as is done in [7].

    The frequencies for which the performance indicators are calculated are Fx = [50 75 100 150 200 300 400 500 750 1000].

    These frequencies are all within the range of 50 to 1000 Hz. This frequency vector is biased to lower frequencies, because the method tends to perform worse at lower frequencies. This is because there are fewer periods in these low frequency samples. The frequency is kept constant during a test.

    The sample lengths tested are N = [10 20 30 40 50 75 100 150 200 400].

    This sample length vector is biased to the lower sample lengths, because these are preferred. A lower sample length excludes more history; hence the frequency estimation will be better with a changing fundamental frequency. The calculation time also becomes shorter with short sample lengths.

    Fig. 5.1 shows the systematic error for the different combinations of sample length and signal frequency. From this figure it is clear that when the number of periods in the sample is an integer, the systematic error is high. This is caused by the interpolation. As indicated before, the interpolation is needed when the signal frequency is not equal to a tested frequency.

    Figure 5.1: relative error for 3 points per trough

    The systematic error should be lower, and this is achieved by choosing more frequencies within the range: the minimum number of points per trough is set to five. Now another option becomes available: with a minimum of five points per trough it is possible to estimate the fundamental frequency by an interpolation with two points at each side of the minimum. A parabolic function is then least squares fitted on five points. The systematic error increases though, when the interpolation is done on five points.

    Figure 5.2: relative error for 5 points per trough

    In appendix G the performance is listed in tables for a setting of five points per trough and three interpolation points. Fig. 5.2 shows the systematic error for these settings. The systematic error is now below 3.5 % at a sample length of 125. This seems to be a sample length for which the accuracy is acceptable, and the performance is further analyzed around this sample length value. The sample length is varied from 80 to 180 in steps of 5. The signal frequency is varied from 25 to 1100 in steps of 25. The systematic and stochastic errors are shown in Fig. 5.3 and Fig. 5.4, respectively.

    From these figures, a sample length of 125 seems to be a reasonable trade-off between accuracy and history. Also, the matrices are kept small enough to obtain a high calculation speed. The signal history needed for the frequency estimation is then:

    t = N/fs = 125/10000 = 0.0125 s.  (5.1)

    At a low signal frequency of 50 Hz, this means that there are 0.625 periods involved. At a high signal frequency of 1000 Hz, this means there are 12.5 periods involved.

    The number of periods is calculated by (5.2), with fx the signal's fundamental frequency:

    per = (N/fs) fx  (5.2)

    Figure 5.3: relative error for five points per trough around N = 125


    Figure 5.4: stochastic error for five points per trough around N = 125

    By these simulations, a value of 125 is obtained for the sample length, N. Furthermore, the accuracy is enhanced by choosing a minimum of five points per trough. The interpolation method is a parabolic interpolation on three points. According to (4.16) the minimum frequency to be estimated without the troughs interfering is 40 Hz, so f0 ≥ 40 Hz. Note that interference does not have much influence for a sawtooth signal, since the higher harmonics do not have a high amplitude.

    The simulations have also shown a calculation time of 0.6 ms. This is a measurement under unfavorable conditions, in which Matlab does not only calculate the frequency but is also doing the loop maintenance. These loops are only introduced for recording the simulation results. The 0.6 ms calculation time only shows that the 125-sample signal segments are not too long, with too big matrices as a consequence. There are several options to decrease the calculation time. These options include:

    A faster computer: These tests are done on a 1.86 GHz Pentium M processor. There are faster processors available. Tests on a 3.2 GHz Pentium 4 processor showed calculation times of 0.4 ± 0.1 ms.

    A different operating system: These tests were done in Windows. Calculation in, for example, Linux will give shorter calculation times.

    C code: Reprogramming the algorithm in Matlab embedded C code should decrease the calculation time significantly.

    There is just 12.5 ms of signal history used for the frequency estimation. This becomes important for a changing velocity of the encoder, resulting in a changing frequency.

    5.3 Validation

    The algorithm is now complete and optimized. For validation, the relative error and the standard deviation, respectively measures for the systematic and stochastic error, are analyzed. A wide range of input frequencies is tested. The input frequencies are biased to the lower frequencies, because the performance is lower there. 21 measurements are done per frequency, as described in Section 5.1. White noise of -40 dB was added to the signal, to simulate a realistic encoder signal. The results are shown in Fig. 5.5 and Fig. 5.6.

    At low frequencies the relative error peaks to just under 2 %, but at higher frequencies the relative error is very low. The maximum of the stochastic error is about 0.1 Hz, independent of the signal frequency. This means that the estimated frequency will be within the range fm ± 2.1σ = fm ± 0.21 Hz with 95 % certainty. This can become important for low frequencies, where 0.2 Hz is relatively much. At low frequencies, the standard deviation tends to be a bit lower, though, as can be seen in Fig. 5.6.

    Figure 5.5: relative error for final algorithm

    Figure 5.6: stochastic error for final algorithm

    The accuracy can be further enhanced by adding more points per trough. The test frequencies are completely free to choose in this algorithm, as long as there are enough to find a minimum, thus at least three points per trough. The more frequencies are chosen, the longer the calculation will take though. The frequencies can even be unevenly spaced, so that there are more points at lower frequencies than at high frequencies, where the error tends to be lower. This would improve the performance at low frequencies without adding much calculation time, as sketched below. When computers become faster and the programming language becomes more efficient, the accuracy can be further enhanced. Both the systematic and the stochastic error would become lower.
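    One way to realize the unevenly spaced test frequencies suggested above would be, for example, a logarithmic spacing (a sketch, not part of the implemented algorithm):

        m     = 75;                                    % number of test frequencies
        ftest = logspace(log10(40), log10(1200), m);   % dense at low, sparse at high frequencies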

    5.4 Other signals

    In this section the fundamental frequency estimator will be used to estimate the fundamental frequency of signals other than sawtooth-like signals. These signals satisfy the requirement that the fundamental frequency is the frequency component with the highest amplitude.
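    The test signals of the following subsections can be generated, for example, with the square and sawtooth functions of the Matlab Signal Processing Toolbox; this sketch only illustrates the signal shapes and does not reproduce the exact simulation inputs.

        fs = 10000;  t = (0:1/fs:0.05)';  f0 = 250;    % 50 ms of signal at 10 kHz
        sinus    = sin(2*pi*f0*t);                     % 5.4.1: a simple sinus
        sq       = square(2*pi*f0*t);                  % 5.4.2: a square signal
        saw      = sawtooth(2*pi*f0*t);                % 5.4.3: a saw wave (all harmonics)
        triangle = sawtooth(2*pi*f0*t, 0.5) + 1;       % 5.4.4: a triangle wave with offset 1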

    5.4.1 A simple sinus

    The first signal that is tested is a simple 250 Hz sinus. This will probably work, because the sawtooth-like signals are sinusoids with odd higher harmonics. The result is shown in Fig. 5.7. The error plot shows a trough at 250 Hz.


    Figure 5.7: 250 Hz sinus signal and error plot

    5.4.2 A square signal

    A square signal has the same higher harmonics as the triangle wave: only the odd integer multiples of the fundamental frequency. These higher harmonics are present with a higher amplitude though; their amplitude is proportional to the inverse of the harmonic number. The error plot, Fig. 5.8, shows these stronger harmonic influences by the small dip at 750 Hz.

    Figure 5.8: 250 Hz square signal and error plot

    5.4.3 A saw wave

    The saw wave includes both the even and the odd multiples of the fundamental frequency. This signal comes close to the signals analyzed in [7]. The error plot in Fig. 5.9 shows a large trough at 250 Hz, the fundamental frequency, but also small dips at the higher harmonics, at 500 Hz and 750 Hz.


    Figure 5.9: 250 Hz saw wave signal and error plot

    5.4.4 A triangle wave with offset

    An offset of 1 is applied to a 250 Hz triangle wave. This offset introduces a trough at a very low frequency. The 250 Hz dip can still be seen in the error function, but the minimum is at the very low frequency, which comes from the offset. The result is shown in Fig. 5.10. The method is not useful for this kind of signal. It may be useful when the offset is filtered out first, as is done for the SinCos encoder signals by the Heydemann model [1].

    Figure 5.10: 250 Hz sawtooth signal with offset and error plot


  • Chapter 6

    Comparison with a wavelet transform method

    A continuous wavelet transform (CWT) algorithm that was developed at the Eindhoven University of Technology is used to find the fundamental frequency of a sawtooth-like signal. This method is compared to the least square fitting (LSF) method presented in this report. The simulations are done in a Simulink environment. The CWT algorithm takes the frequency with the highest amplitude in the spectrum and outputs that as the fundamental frequency.

    6.1 The simulation

    First a sawtooth signal sample of 10 seconds with a changing frequency from 50 to 400 Hz is made. Hence the input frequency is known. Two simulations are done, one without and one with noise, for both methods. The noise is added as a random number with a variance of 0.01 and an average of 0.
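    Such an input signal could be generated, for example, as follows (a sketch under the assumption of a linear frequency sweep; the actual Simulink test signal is not reproduced here):

        fs    = 10000;  t = (0:1/fs:10)';                 % 10 s at 10 kHz
        finst = linspace(50, 400, numel(t))';             % instantaneous frequency 50 -> 400 Hz
        phase = 2*pi*cumsum(finst)/fs;                    % integrate frequency to phase
        sig   = sawtooth(phase, 0.5) + sqrt(0.01)*randn(size(t));  % sawtooth-like (triangle) wave plus noise with variance 0.01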

    Figure 6.1: least square method: a. without noise, b. with noise


    6.2 The results

    The frequency of this signal is then estimated by both the CWT algorithm and the LSF algorithm. The results are shown in Fig. 6.1 and Fig. 6.2.

    There are two sources of error. First there is the error on the frequency estimation itself. For high frequencies, the frequency resolution of the CWT algorithm is lower and the frequency estimation gets worse. For the LSF algorithm, the frequency estimation at low frequencies is less accurate. This is seen by the high error in Fig. 6.1 at low frequencies.

    Furthermore there is the delay. The low time resolution of the CWT algorithm at low frequencies introduces an error, because changes in the fundamental frequency are seen quite late. The LSF algorithm uses only 0.625 period for the frequency estimation at a frequency of 50 Hz. Therefore, there is less delay, causing less error. The time-frequency resolutions of the CWT method are illustrated in Fig. 3.2.

    The CWT shows a startup error. This is probably because there is no full low frequency period yet, so the highest amplitude found is a high frequency one. The spectrum is thus not complete at that moment.

    Figure 6.2: wavelet transform method: a. without noise, b. with noise

    6.3 Conclusion

    The results of these simulations show that the LSF method has a higher stochastic error than the CWT method for low frequencies, but a lower stochastic error for higher frequencies.

    Under noisy conditions the stochastic error is higher and the estimations get worse. The wavelet method is less affected by noise. At higher frequencies, the stochastic error of the CWT method becomes higher. This is because of the lower frequency resolution at high frequencies. This frequency resolution is illustrated in Fig. 3.2.

    On the other hand, the CWT shows more delay: the estimated frequency lags behind the current frequency, especially at low frequencies, because the time resolution is lower there. This introduces a systematic error. The LSF method uses a smaller sample size, just 0.625 period, so the estimation suffers less delay. This enhances the accuracy, compared to the CWT algorithm, when the fundamental frequency of the signal is changing.


  • Chapter 7

    Implementation

    The frequency estimation algorithm is implemented on the measurement setup to estimate the frequency of the encoder output in real time. In this chapter the results will be discussed. First the algorithm is adapted to the current measurement setup.

    7.1 The algorithm and the setup

    Because the current measurement setup, as described in Section 2.1, only measures at a sampling rate of 1 kHz, the algorithm is changed. Because the sampling frequency is a factor 10 lower, the maximum frequency to be estimated is also lowered by a factor 10: fmax = 100 Hz. The minimum frequency to be estimated is assumed to be 10 Hz. The sample length, N, is set to 80. The number of periods in the sample is then just 0.8 at 10 Hz, and 8 at 100 Hz. The length of the sample is 80 ms. This is quite long, but there are fewer data points per unit of time than at a sampling frequency of 10 kHz, so the sample length needs to be longer to ensure a high accuracy. That is why 0.8 period is used instead of 0.6; otherwise the accuracy would be too low. For higher low frequency accuracy, the minimum number of points per trough, nppt, is increased to 9. The changes to the algorithm settings are included in appendix F.
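    For reference, with these settings (4.18) gives m = 1.2 · 9 · 80 · 100/1000 ≈ 86 test frequencies; the exact number used in appendix F may differ due to rounding.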

    The frequency estimation algorithm is connected to the analog output of the encoder in Simulink. The Simulink block scheme is used as it is provided in appendix D. The parabolic interpolation subsystem is provided in appendix E.

    The reference signal for the feedback control of the encoder disc is shown in Fig. 7.1. Both the position and the velocity are shown. The velocity is what influences the frequency, while the desired position is the reference for the feedback controller. The velocity is first constant, then the encoder speeds up and is kept at constant speed until it is slowed down again. The velocity is then constant again, slows down until the encoder turns around and speeds up in the other direction. When the encoder rotates the other way around, the velocity is kept constant again. This reference thus shows a constant velocity, a changing velocity and even a turnaround of the encoder disc. That is where the frequency estimation is expected to fail, because when the encoder does not turn, there is no frequency content in the signal.

    The velocity is not known, because the input of the controller design of the system is unknown. The velocity is therefore tuned to give a minimum frequency of about 10 Hz and a maximum frequency of 100 Hz. The profile is shown in Fig. 7.1.

    Note that the frequency estimation will be done on the uncorrected waveforms. The Heydemann correction is not implemented yet, so the offset may distort the estimations.


    Figure 7.1: position and velocity input

    7.2 The results

    The output of the real-time Simulink scheme is the estimated frequency. This frequency estimate of the least square fit (LSF) is saved. Additionally, the waveform and the reference are saved. The fundamental frequency of the waveform is then also estimated off-line by the continuous wavelet transform (CWT) method, for comparison. The results of the frequency estimation are shown in Fig. 7.2. The absolute value of the velocity of the reference is also shown in this figure. The frequency profile should follow the velocity profile.

    Figure 7.2: implemented frequency estimation results

    The reference is not followed exactly. There are several reasons for this, including:

    Control error: The controller does make an error. The velocity profile is thus not followed exactly.

    Coulomb friction: When the flywheel turns around, it stops for some time. Because it does not start moving immediately, as the reference does, this introduces an error.

    Resonance: A bit of resonating sound was heard at high velocities. Resonance increases the error. This is probably because the PD controller amplifies the high frequencies in the steps of the encoder counts.

    No Heydemann correction: The waveform is not the sawtooth-like signal, because the Heydemann correction was not available yet. Especially the offset makes the frequency estimation by the proposed least squares fitting difficult. The wavelet transform will have fewer problems with this.


    From the results it can be seen that, at high frequencies, the reference is not followed completely. This is probably because of the maximum speed of the motor. When the encoder is turned around, at the end of the measurement (negative velocity), the frequency is expected to have the same value as before the turnaround. The frequency is actually lower. This was also seen in the experiment, when the flywheel, after the turnaround, rotated very slowly. This can be caused by friction effects or a bad friction feedforward.

    This low frequency at the end is estimated very badly by the LSF method. The offset of the waveform becomes so important at this low frequency that the minimum error can be found lower than the fundamental frequency of the waveform. The wavelet transform method shows a far better low frequency performance.

    Both the wavelet transform and the least square fit show the same peaks around the reference in the frequency estimation. The wavelet transform shows them only a bit later, because of the lower time resolution: it uses a longer sample to measure the frequency from. The LSF shows larger peaks around the reference, probably because of the higher stochastic error.

    When the encoder turns around, the frequency becomes very low. Neither the CWT nor the LSF method can estimate the frequency anymore. The CWT method, with its better low frequency performance, does this better than the LSF method, but also loses track of the frequency. Note that, when the velocity becomes zero, there is no frequency content anymore, so the frequency of the waveform cannot be estimated. Furthermore, the choice of the frequencies at which the least square fit is evaluated can enhance the low frequency performance.

    7.3 Conclusion

    To reconstruct the desired sinusoid from the measurement data, there are many errors to compensate for. The Heydemann model corrects the offset, amplitude and phase errors in the signal. Without this correction, the CWT shows far better results than the LSF method. The low frequency performance of the CWT is also better. The LSF method, however, shows a changing frequency earlier than the CWT at low frequencies. This is because of the low time resolution of the CWT at low frequencies. This can be seen in Fig. 3.2 and becomes important when there are changing frequencies in the system. For an encoder this means a changing velocity.


  • Chapter 8

    Conclusion and recommendations

    8.1 Conclusion

To enhance the position measurements of a SinCos encoder, the analog waveforms can be used instead of the zero transition data, which is what is usually done. When the fundamental frequency of these signals is known, the ideal sinusoid can be reconstructed.

The frequency of the signals is assumed to be in the range of 50 to 1000 Hz. According to the Shannon sampling theorem, the sampling frequency needs to be at least 2 kHz. In practice a sampling frequency of 10 kHz is chosen.

Several real-time frequency estimation algorithms have been considered. The most useful were a wavelet transform method, based on the concept of fundamentalness, and a least square fitting algorithm, because they estimate the frequency over a wide range and with a high accuracy in the presence of higher harmonics.

The fundamental frequency estimation algorithm, based on a least square fit, is presented in this report. A parabolic interpolation, based on linear regression, is added to enhance the accuracy. Furthermore, the algorithm is adapted to frequencies in Hz instead of relative frequencies. The algorithm was written in Matlab code and, for real-time implementation, was also built in a Simulink environment. Calculation times were found to be 0.4 ± 0.1 ms on a Pentium 4 3.2 GHz processor in a Matlab environment.
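For reference, the interpolation step itself amounts to only a few lines. The sketch below is a minimal extract using the variable names of the full program in Appendix C (e is the error vector, f the relative frequency vector and fs the sampling frequency); it is an illustration, not a separate implementation.

    % minimal extract of the parabolic interpolation step (cf. Appendix C)
    i = find(e == min(e), 1);                 % index of the smallest fitting error
    f_para = [f(i-1); f(i); f(i+1)];          % three frequencies around the minimum
    e_para = [e(i-1); e(i); e(i+1)];          % corresponding error values
    A_para = [f_para.^2, f_para, ones(3,1)];  % regression matrix for e = y1*f^2 + y2*f + y3
    y = A_para \ e_para;                      % least squares fit of the parabola
    fx_est = -y(2)/(2*y(1))*fs;               % vertex of the parabola, converted to Hz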

The algorithm was optimized for a reasonable accuracy while keeping the calculation time low, to ensure real-time operation. The sample length on which the frequency estimation is done was chosen as 125 samples, and the minimum number of points per trough in the error function as 5. With these settings both the systematic and the stochastic error are at a reasonable level: at 52 Hz the relative error peaks at 2 %, but for higher frequencies the error is well below 1 %, and the standard deviation is well below 0.1 Hz for the whole frequency range of 50 to 1000 Hz. The minimum fundamental frequency that can be estimated without the troughs interfering is 40 Hz, which is well below the minimum assumed frequency of 50 Hz.

The algorithm was also tested on other signals, and the fundamental frequency was successfully estimated for a sinusoidal, a square and a saw wave signal. When the signal has an offset, however, the fundamental frequency could not be estimated.

The least square algorithm was compared to a standard continuous wavelet transform algorithm. The stochastic error of the least square algorithm was found to be worse than that of the continuous wavelet transform. The continuous wavelet transform, on the other hand, showed a larger systematic error, especially at low frequencies. This is due to the fact that the wavelet transform uses a full period of the signal at low frequencies, while the least square fitting algorithm uses only 0.625 period. This delay, resulting from the low time resolution at low frequencies, leads to an error when the frequency to be estimated is changing.

These results were confirmed by the implementation on the measurement setup. The wavelet transform shows a more robust low frequency estimation and has fewer problems with an offset error.

    8.2 Recommendations

The speed of calculation can be further enhanced by rewriting the algorithm in embedded C code. The algorithm can then be used more efficiently in, for example, a Simulink environment.

As computers become faster or the algorithm is implemented more efficiently, the number of points in the error function, e(f), can be further increased. The points can even be spaced non-linearly. More points in the low frequency range would greatly improve the accuracy in this region, where the relative error is quite high now. The stochastic error would also decrease significantly, which is most welcome, since a 1 Hz stochastic error has far more influence on a 50 Hz signal than on a 1000 Hz signal. A possible non-linear spacing is sketched below.
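As an illustration of such a non-linear spacing, the following sketch replaces the linear frequency grid of Appendix C by a logarithmic one; the values of fmin, fmax and M are illustrative assumptions, not part of the current implementation.

    % minimal sketch of a logarithmically spaced frequency grid (illustrative values)
    fs   = 10e3;                                       % sampling frequency (Hz)
    fmin = 40;                                         % lowest frequency of interest (Hz)
    fmax = 1000;                                       % highest frequency of interest (Hz)
    M    = 200;                                        % number of evaluation points
    f = logspace(log10(fmin/fs), log10(fmax/fs), M);   % relative frequencies f/fs
    % the rest of the algorithm (s, c, P, Q, R and e(f)) is unchanged; only the grid
    % on which the error is evaluated becomes denser at low frequencies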

If it can be done fast enough, the sample length, N, in the LSF method could be made adaptive. When the sample length changes, some matrices change as well and need to be recalculated. When computers are fast enough and the program is efficient enough, it should be possible to update these matrices online. Then a good frequency measurement can be obtained with only 0.6 period for both high and low frequency signals. The sketch below indicates which quantities depend on N.
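The quantities that have to be recomputed when N changes can be summarized in a short sketch; the variable names follow Appendix C and the function name update_lsf_matrices is only illustrative, not an existing part of the program.

    % minimal sketch of recomputing the N-dependent matrices for an adaptive sample
    % length; variable names as in Appendix C, the function name is hypothetical
    function [s, c, P, Q, R, D] = update_lsf_matrices(N, f)
        n = (1:N)';              % new sample index vector (N x 1)
        s = sin(2*pi*n*f);       % sine database matrix   (N x M)
        c = cos(2*pi*n*f);       % cosine database matrix (N x M)
        P = sum(s.*s, 1);        % (1 x M)
        Q = sum(c.*s, 1);        % (1 x M)
        R = sum(c.*c, 1);        % (1 x M)
        D = P.*R - Q.*Q;         % denominator of the a and b expressions
    end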

When the velocity goes to zero, for example when the encoder reverses direction, the frequency can no longer be estimated by the least square fitting algorithm. In a complete standstill there is no frequency content at all. It should therefore be noted that the position measurement with the analog SinCos encoder output can only be enhanced when the encoder is moving. The least square fitting algorithm, as designed for the specifications, only works well when the frequency is above 50 Hz. This low frequency behaviour may be improved by adding more points to the error function at low frequencies.

The position during a standstill may be found by extrapolation of the previous positions. An indication of a too low frequency may be that the minimum error is very high, as can be seen in Fig. 5.10. A bound on the error could be investigated to obtain this indication, as sketched below.
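A possible form of such a bound is sketched here; the threshold e_max is a hypothetical tuning value that would have to be determined from measurements such as those in Fig. 5.10, and is not part of the current implementation.

    % minimal sketch of an error bound as indication of a too low frequency;
    % e is the (1 x M) error vector of the least square fit (Appendix C),
    % e_max is a hypothetical threshold
    e_min = min(e);              % smallest fitting error over the frequency grid
    e_max = 5;                   % hypothetical bound on the acceptable error
    if e_min > e_max
        frequency_valid = false; % too low frequency or standstill: e.g. extrapolate the position
    else
        frequency_valid = true;  % frequency estimate can be trusted
    end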

If a real-time frequency estimation algorithm is needed and both a continuous wavelet transform and a least square fitting algorithm are considered, the choice should depend on the kind of signal that has to be analyzed. If the frequency changes fast, the delay time, which is longer for the wavelet transform at low frequencies, can cause significant errors at low frequencies; these errors will be larger for the wavelet transform. The CWT method shows better low frequency performance, although the LSF could be optimized for these low frequency signals.

  • Bibliography

[1] P.L.M. Heydemann. Determination and correction of quadrature fringe measurement errors in interferometers. Applied Optics, vol. 20, no. 19, pages 3382-3384, October 1981.

[2] Wikipedia, the free encyclopedia. Triangle wave. URL: http://en.wikipedia.org/wiki/Triangle_wave, May 2008.

[3] J.J. Kok and M.J.G. van de Molengraft. Signaal Analyse. Technical report, Eindhoven University of Technology, Department of Mechanical Engineering, 2003.

[4] T. Lobos and J. Rezmer. Real-Time Determination of Power System Frequency. IEEE Transactions on Instrumentation and Measurement, vol. 46, no. 4, pages 877-881, August 1997.

[5] S. Karlsson, J. Yu and M. Akay. Time-Frequency Analysis of Myoelectric Signals During Dynamic Contractions: A Comparative Study. IEEE Transactions on Biomedical Engineering, vol. 47, no. 2, pages 228-238, February 2000.

[6] H. Kawahara, I. Masuda-Katsuse and A. de Cheveigne. Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds. Speech Communication, vol. 27, pages 187-207, Elsevier Science, 1999.

[7] A. Choi. Real-Time Fundamental Frequency Estimation by Least-Square Fitting. IEEE Transactions on Speech and Audio Processing, vol. 5, no. 2, pages 201-205, March 2000.

[8] S. Kim and Y. Park. Active Control of Multi-Tonal Noise with Reference Generator Based on On-line Frequency Estimation. Journal of Sound and Vibration, pages 647-666, 1999.

[9] R. Merry. Wavelet Theory and Applications, A Literature Study. Eindhoven University of Technology, Department of Mechanical Engineering, DCT nr. 2005.53, June 7, 2005.

[10] A. Routray, A. Kumar Pradhan and K. Prahallad Rao. A Novel Kalman Filter for Frequency Estimation of Distorted Signals in Power Systems. IEEE Transactions on Instrumentation and Measurement, vol. 51, no. 3, pages 469-479, June 2002.

[11] T. Lin, M. Tsuji and E. Yamada. A Wavelet Approach to Real Time Estimation of Power System Frequency. SICE, pages 58-65, July 2001.

[12] A. Cichocki and T. Lobos. Artificial Neural Networks for Real-Time Estimation of Basic Waveforms of Voltages and Currents. IEEE Power Industry Computer Application Conference, pages 357-363, May 1993.

[13] L.L. Lai, C.T. Tse, W.L. Chan and A.T.P. So. Real-Time Frequency and Harmonic Evaluation using Artificial Neural Networks. IEEE Transactions on Power Delivery, vol. 14, no. 1, pages 52-59, January 1999.

[14] B. Kolman and D.R. Hill. Elementary Linear Algebra. Pearson Education, Inc., eighth edition, 2004, pages 276-281. ISBN 0-13-121933-2.


[15] F.L.M. Delbressine, P.H.J. Schellekens, H. Haitjema and F.G.A. Homburg. Metrologie voor W. Technical report, Eindhoven University of Technology, Department of Mechanical Engineering, 2006.

  • Appendix A

    Solution for a and b

The system of equations to be solved is

    aP + bQ + W = 0    (A.1)
    aQ + bR + X = 0    (A.2)

Solve for a. Equation (A.1) is repeated and (A.2) is multiplied by Q/R:

    aP + bQ + W = 0    (A.3)

    a\frac{Q^2}{R} + bQ + \frac{XQ}{R} = 0    (A.4)

Subtracting (A.4) from (A.3):

    aP + W - a\frac{Q^2}{R} - \frac{XQ}{R} = 0    (A.5)

    a\left(P - \frac{Q^2}{R}\right) = \frac{XQ}{R} - W    (A.6)

    a = \frac{XQ - WR}{PR - Q^2}    (A.7)

Solve for b. Equation (A.1) is multiplied by Q/P and (A.2) is repeated:

    aQ + b\frac{Q^2}{P} + \frac{WQ}{P} = 0    (A.8)

    aQ + bR + X = 0    (A.9)

Subtracting (A.8) from (A.9):

    bR + X - b\frac{Q^2}{P} - \frac{WQ}{P} = 0    (A.10)

    b\left(R - \frac{Q^2}{P}\right) = \frac{WQ}{P} - X    (A.11)

    b = \frac{WQ - XP}{PR - Q^2}    (A.12)
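As a quick numerical check of (A.7) and (A.12), not part of the original derivation, the closed-form expressions can be compared with a direct solution of the 2x2 system for arbitrary coefficient values:

    % minimal numerical check of (A.7) and (A.12); the coefficient values are arbitrary
    P = 3.1; Q = 1.4; R = 2.7; W = -0.8; X = 0.5;
    a = (X*Q - W*R)/(P*R - Q^2);     % equation (A.7)
    b = (W*Q - X*P)/(P*R - Q^2);     % equation (A.12)
    ab = [P Q; Q R] \ [-W; -X];      % direct solution of (A.1) and (A.2)
    disp(max(abs([a; b] - ab)))      % difference is of the order of machine precision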


  • Appendix B

    The width of a trough

The Fourier series of an arbitrary signal is

    x(t) = \sum_{k=0}^{\infty} \left[ a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t) \right]    (B.1)

with \omega_0 the minimal measurable frequency in radians per second, provided that the measurement contains exactly one period or an integer number of full periods of this frequency component:

    \omega_0 = \frac{2\pi}{T_0}    (B.2)

From this, the fundamental frequency in Hz can be calculated by (B.3), which is also equal to the sampling frequency divided by the sample length:

    f_0 = \frac{1}{T_0} = \frac{f_s}{N}    (B.3)

So the spectrum (B.1) is built from the frequencies of the infinite series

    f_0, \; 2 f_0, \; 3 f_0, \ldots    (B.4)

The error function can only go to zero for a frequency in this series, because only these frequencies are present in the signal. These frequencies are spaced a distance f_0 from each other. That means that a trough can only be formed between f \pm f_0. The relative frequency of this spacing is

    f_{0,rel} = \frac{f_0}{f_s} = \frac{1}{N}    (B.5)

or, in radial frequency,

    \omega_{0,rel} = \frac{2\pi}{N}.    (B.6)

The maximum width of a trough is therefore 2 f_0 = 2 f_s/N in Hz, or 2/N in relative frequency.


Suppose a sinusoidal signal with only one frequency in the spectrum; then the fitting error is low around this frequency and high for all other frequencies. The error decreases closer to the frequency of the signal. The next multiple of f_0 cannot give a response, because the signal is entirely different there. This is shown in Fig. B.1, where the error for a 250 Hz signal is shown, calculated with a sample length of N = 125 and a sampling frequency f_s = 10 kHz. That makes f_0 = 80 Hz. The trough goes from approximately 170 Hz to 330 Hz, which is 160 Hz wide, i.e. 2 f_0.

    Figure B.1: error for a 250 Hz signal
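The numbers in this example can be verified with a few lines (a minimal sketch using the values given above):

    % minimal check of the trough width example of Fig. B.1
    fs = 10e3;               % sampling frequency (Hz)
    N  = 125;                % sample length
    f0 = fs/N;               % minimal measurable frequency: 80 Hz
    width = 2*f0;            % maximum trough width: 160 Hz
    fprintf('f0 = %g Hz, maximum trough width = %g Hz\n', f0, width)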

  • Appendix C

    The frequency estimation program

% % % % % Least square algorithm for estimating the fundamental frequency
% % % % % author:  P.J.H. Maas
% % % % % date:    16th of april 2008
% % % % % version: 1.0
% % % % % version history
% version 0.1: working with forloops, plotting w-e
% version 0.2: working with frequencies instead of relative frequencies
% version 0.3: matrix calculations instead of forloops for speed
% version 0.4: noise added
% version 0.5: parabolic interpolation
% version 0.6: range and sampling frequency specified
% version 0.7: replaced repmat functions by matrix calculations
% version 0.8: making nppt an accuracy variable
% version 1.0: final

% clean up
clc
close all
clear all

% generate sawtooth signal with frequency fx and sampling frequency fs
fx   = 250;      % sawtooth frequency
fs   = 10*10^3;  % sampling frequency
fmax = fs/10;    % max frequency to be estimated
nppt = 5;        % number of samples per flank of the trough
N    = 125;      % number of used samples (1 period: fs/fx)

% introducing a test signal
t = 0:1/fs:1;
% x includes a little bit of noise as the real signals aren't that noisy
x = sawtooth(2*pi*fx.*t + 1/2*4/7*pi, 1/2) + wgn(length(t),1,-40)';

% algorithm parameters
% sample numbers (time)
n = (1:N)';
% range made (factor 2 when N is adaptive, a small factor >1 for other N)
m = round(1.2*fmax/fs/(1/(nppt*N)));


mi = 1:m;
f = mi/(nppt*N);   % relative frequency (f/fs) vector
M = length(f);

% making repmat matrices
oneN = ones(N,1);
oneM = ones(1,M);

% making vectors offline, preallocating memory
W = zeros(1,M);
X = zeros(1,M);
A = zeros(N,M);
B = zeros(N,M);
xest = zeros(N,M);
xM = zeros(N,M);
e = zeros(1,M);

    % frequency estimation

% make sin and cos vector for all omega, used as database matrices (N x M)
s = sin(2*pi*n*f);
c = cos(2*pi*n*f);

% make P,Q,R as specified in the algorithm, (1 x M)
P = sum(s.*s,1);
Q = sum(c.*s,1);
R = sum(c.*c,1);
D = P.*R-Q.*Q;

% processor time in (stopwatch start); after this, functions of x are
% calculated
tic;

% make W and X vectors as specified in the algorithm, (1 x M)
W = -x(1:N)*s;
X = -x(1:N)*c;

% make a and b vectors as specified in the algorithm, (1 x M)
a = (Q.*X-R.*W)./(P.*R-Q.*Q);
b = (Q.*W-R.*X)./(P.*R-Q.*Q);
% transform a and b into matrices to use matrix calculations
A = oneN*a;
B = oneN*b;

% calculate error vector e, (1 x M) (as a function of f)
xest = A.*s + B.*c;
xM = x(1:N)'*oneM;
e = sum( (xest-xM).^2 ,1 );

    % parabolic interpolation


    i = ( find(e==min(e)) );

% 3 points for interpolation
e_para = [e(i-1); e(i); e(i+1)];
f_para = [f(i-1); f(i); f(i+1)];

% parabolic linear regression
A_para = [f_para.^2,f_para.^1,f_para.^0];
y = A_para\e_para;

    f_est_para = -y(2)/2/y(1);

% estimated frequency
fx_est_para = f_est_para*fs;

% output
% processor time out
% display time is not representative because it is only one measurement
tt = toc;

figure
plot(t,x)
title('signal: sawtooth with noise')
xlabel('time (s)')
xlim([0,10*1/fx])

figure
semilogx(f*fs,e,'-x')
title('frequency plot')
xlabel('frequency (Hz)')
ylabel('error')
xlim([fmax/25,fmax*1.2])

hold on
f_para_plot = f(i-1):.1/fs:f(i+1);
e_para = polyval(y,f_para_plot);
semilogx(f_para_plot*fs,e_para,'r')

legend('calculated error','polynomial fit')

% periods
p = N/fs*fx
% number of samples
N
% estimated frequency
fx_est_para


  • Appendix D

    The Simulink block scheme

[Figure: Simulink block scheme of the least square fitting frequency estimation. Recognizable blocks: a tapped delay line (125 delays) collecting the input samples x, transpose and repmat blocks (oneN, oneM), the sine (s) and cosine (c) database matrices, the Q, R and D matrices, matrix multiplications computing W, X, a, b, xest and the squared error e, a rate transition, and the parabolic interpolation subsystem producing the estimated frequency fx_est.]


  • Appendix E

    The interpolation subsystem

[Figure: Simulink subsystem for the parabolic interpolation. Recognizable blocks: a minimum/index block locating the smallest error, selectors picking the rows i-1, i and i+1 of the error vector e and the frequency vector f, an SVD solver (least squares fit, AX = B), selectors for the coefficients y(1) and y(2), a gain of -1/2 and a gain fs, producing the estimated frequency fx_est.]


  • Appendix F

The frequency estimation program at 1 kHz sampling frequency

    Only the settings part is shown, the rest of the algorithm is the same.

% % % % % Least square algorithm for estimating the fundamental frequency
% % % % % author:  P.J.H. Maas
% % % % % date:    16th of april 2008
% % % % % version: 1.0
% % % % % version history
% version 0.1: working with forloops, plotting w-e
% version 0.2: working with frequencies instead of relative frequencies
% version 0.3: matrix calculations instead of forloops for speed
% version 0.4: noise added
% version 0.5: parabolic interpolation
% version 0.6: range and sampling frequency specified
% version 0.7: replaced repmat functions by matrix calculations
% version 0.8: making nppt an accuracy variable
% version 1.0: final

% clean up
clc
close all
clear all

% generate sawtooth signal with frequency fx and sampling frequency fs
fx   = 100;     % sawtooth frequency
fs   = 1*10^3;  % sampling frequency
fmax = fs/10;   % max frequency to be estimated
nppt = 9;       % number of samples per flank of the trough
N    = 80;      % number of used samples (1 period: fs/fx)


  • Appendix G

    Simulation results

The following tables show the results of the simulations with a minimum of five points per trough and a parabolic interpolation over three points. The empty cells in the tables represent an error: there was no interpolation point. This happens at low frequencies in combination with a low sample length.

             Input frequency (Hz)
  N      50      75     100     150     200     300     400     500     750    1000
 400  49.83   74.88   99.91  149.94  199.96  299.97  399.98  499.99  749.99  999.99
 200  49.49   75.35   99.68  149.77  199.82  299.88  399.91  499.94  749.95  999.95
 150  51.67   75.21  100.44  149.96  199.71  300.17  399.85  500.11  749.87  999.93
 100  52.00   77.46   99.01  150.64  199.37  299.54  399.65  499.73  750.16  999.84
  75      -   75.89  103.19  150.43  200.85  299.85  399.40  500.50  749.47 1000.07
  50      -       -  103.67  154.59  198.00  301.23  398.74  500.74  750.64  999.37
  40      -       -       -  149.82  202.87  299.97  398.27  498.50  748.78  999.18
  30      -       -       -       -  199.29  302.00  399.67  502.01  748.93  998.00
  20      -       -       -       -       -  297.92  405.46  495.45  752.24  995.66
  10      -       -       -       -       -       -       -  511.30  763.03       -

    Table G.1: mean estimated frequency (Hz)

             Input frequency (Hz)
  N      50      75     100     150     200     300     400     500     750    1000
 400  0.011   0.010   0.011   0.011   0.009   0.012   0.012   0.010   0.008   0.014
 200  0.017   0.027   0.039   0.027   0.026   0.045   0.029   0.033   0.024   0.034
 150  0.049   0.052   0.041   0.062   0.064   0.050   0.049   0.061   0.047   0.065
 100  0.075   0.083   0.080   0.081   0.098   0.079   0.094   0.080   0.102   0.069
  75      -   0.088   0.152   0.153   0.156   0.119   0.147   0.135   0.179   0.160
  50      -       -   0.134   0.175   0.317   0.223   0.269   0.247   0.195   0.253
  40      -       -       -   0.261   0.356   0.303   0.335   0.269   0.408   0.275
  30      -       -       -       -   0.383   0.482   0.547   0.520   0.559   0.532
  20      -       -       -       -       -   0.870   1.026   1.001   0.861   0.879
  10      -       -       -       -       -       -       -   1.969   3.786       -

    Table G.2: standard deviation on estimated frequency (Hz)


             Input frequency (Hz)
  N      50      75     100     150     200     300     400     500     750    1000
 400  0.169   0.117   0.091   0.062   0.040   0.030   0.021   0.015   0.010   0.006
 200  0.508   0.349   0.324   0.233   0.175   0.118   0.088   0.061   0.054   0.051
 150  1.670   0.214   0.439   0.036   0.287   0.172   0.153   0.106   0.129   0.069
 100  2.005   2.463   0.992   0.642   0.629   0.464   0.347   0.272   0.155   0.165
  75      -   0.888   3.192   0.434   0.851   0.150   0.599   0.495   0.532   0.071
  50      -       -   3.666   4.589   1.999   1.233   1.263   0.737   0.637   0.627
  40      -       -       -   0.184   2.874   0.025   1.728   1.503   1.216   0.824
  30      -       -       -       -   0.706   1.997   0.331   2.005   1.073   2.002
  20      -       -       -       -       -   2.081   5.458   4.552   2.244   4.335
  10      -       -       -       -       -       -       -  11.299  13.031       -

    Table G.3: absolute error (Hz)

             Input frequency (Hz)
  N      50      75     100     150     200     300     400     500     750    1000
 400  0.0034  0.0016  0.0009  0.0004  0.0002  0.0001  0.0001  0.0000  0.0000  0.0000
 200  0.0102  0.0047  0.0032  0.0016  0.0009  0.0004  0.0002  0.0001  0.0001  0.0001
 150  0.0334  0.0029  0.0044  0.0002  0.0014  0.0006  0.0004  0.0002  0.0002  0.0001
 100  0.0401  0.0328  0.0099  0.0043  0.0031  0.0015  0.0009  0.0005  0.0002  0.0002
  75       -  0.0118  0.0319  0.0029  0.0043  0.0005  0.0015  0.0010  0.0007  0.0001
  50       -       -  0.0367  0.0306  0.0100  0.0041  0.0032  0.0015  0.0008  0.0006
  40       -       -       -  0.0012  0.0144  0.0001  0.0043  0.0030  0.0016  0.0008
  30       -       -       -       -  0.0035  0.0067  0.0008  0.0040  0.0014  0.0020
  20       -       -       -       -       -  0.0069  0.0136  0.0091  0.0030  0.0043
  10       -       -       -       -       -       -       -  0.0226  0.0174       -

    Table G.4: relative error

             Input frequency (Hz)
  N      50      75     100     150     200     300     400     500     750    1000
 400      2       3       4       6       8      12      16      20      30      40
 200      1     1.5       2       3       4       6       8      10      15      20
 150   0.75   1.125     1.5    2.25       3     4.5       6     7.5   11.25      15
 100    0.5    0.75       1     1.5       2       3       4       5     7.5      10
  75      -  0.5625    0.75   1.125     1.5    2.25       3    3.75   5.625     7.5
  50      -       -     0.5    0.75       1     1.5       2     2.5    3.75       5
  40      -       -       -     0.6     0.8     1.2     1.6       2       3       4
  30      -       -       -       -     0.6     0.9     1.2     1.5    2.25       3
  20      -       -       -       -       -     0.6     0.8       1     1.5       2
  10      -       -       -       -       -       -       -     0.5    0.75       -

    Table G.5: number of periods in sample