54284953 Linear Prediction for Speech Encoding


Linear Prediction using Lattice Filters and its Application in Speech Compression

    Abstract

The project aims at studying the theory of linear prediction of stationary signals and applying this knowledge to the compression of speech. Linear prediction can be achieved by a number of methods; we have used the lattice filter method of linear prediction and highlighted its advantages. The calculation of the p prediction coefficients involves the inversion of a p × p matrix, which requires O(p³) operations. To reduce the number of operations we have used the Levinson-Durbin algorithm, which exploits a special property of the autocorrelation matrix to reduce the number of operations to O(p²).

To apply this technique to speech, we have segmented and windowed the speech samples so that each segment can be treated as a stationary signal. We have analyzed the forward prediction error and the recreated signal against several criteria, such as the order of prediction, the size of the window segment and the number of bits used to encode the error sequence. We have also created compressed sound files which can be played to get an idea of the results obtained.


    Contents

1. Introduction
    1.1 Human Speech Production
    1.2 Theory of Speech Coding
    1.3 Historical Perspective of Linear Predictive Coding
2. Linear Prediction
    2.1 Innovations Representation of a Random Process
    2.2 Rational Power Spectra
    2.3 Relationships between the Filter Parameters and the Autocorrelation Sequence
    2.4 Theory of Linear Prediction
        2.4.1 The Autocorrelation Method
        2.4.2 The Covariance Method
3. Lattice Filters
    3.1 Prediction Model Order Selection
4. The Levinson-Durbin Algorithm
5. Progress Report
6. Observations and Results
    6.1 Effect of Order of Prediction
        6.1.1 Spectrum of the Error Signal
    6.2 Effect of Segment Size
    6.3 Effect of Quantisation and Compression of the Error Signal
7.1 Conclusions
7.2 Future Scope of the Project
8. References
A. Appendix I: New MATLAB Functions
B. Appendix II: MATLAB Codes
C. Appendix III: C Codes
    Code C.1 Levinson Durbin Algorithm
    Code C.2 Levinson Durbin Header File
    Code C.3 MA Lattice Filter
    Code C.4 MA Lattice Filter Header File
    Code C.5 AR Lattice Filter
    Code C.6 AR Lattice Filter Header File
    Code C.7 Segmentation and Hanning Window
    Code C.8 Segmentation and Hanning Window Header File


    List of Figures

Figure 1.1: Human Speech Production System
Figure 1.2: Model of Speech Production
Figure 2.1: Time and Power Spectral Density Representation
Figure 2.2: Filters for generating a random process from white noise
Figure 3.1: Forward Linear Prediction
Figure 3.2: Prediction Error Filter
Figure 3.3: Single stage Lattice Filter
Figure 3.4: Two stage Lattice Filter
Figure 3.5: P stage Lattice Filter
Figure 5.1: Original, Error and Recreated signal (without segmentation)
Figure 5.2: Original, Error and Recreated signal (with non-overlapping segmentation)
Figure 5.3: Original, Error and Recreated signal (with overlapping windowing)
Figure 5.4: Original, Error and Recreated signal with lattice filters
Figure 5.5: Frequency Spectrum of Original Signal
Figure 5.6: Frequency Spectrum of Recreated Signal
Figure 6.1: Graph of Prediction Gain vs Order of Prediction
Figure 6.2: Frequency representation of error when order p=2
Figure 6.3: Frequency representation of error when order p=6
Figure 6.4: Frequency representation of error when order p=8
Figure 6.5: Frequency representation of error when order p=12
Figure 6.6: Frequency representation of error when order p=20
Figure 6.7: Frequency representation of error when order p=40
Figure 6.8: Frequency representation of original signal
Figure 6.9: Frequency representation of original signal (Shifted)
Figure 6.10: Frequency representation of recreated signal (Shifted) p=8
Figure 6.11: Frequency representation of recreated signal (Shifted) p=12
Figure 6.12: Window size vs Predictive Gain
Figure 6.13: Original Signal encoded in 8 bits
Figure 6.14: Recreated Signal when error is encoded in 8 bits
Figure 6.15: Recreated Signal when error is encoded in 7 bits
Figure 6.16: Recreated Signal when error is encoded in 6 bits
Figure 6.17: Recreated Signal when error is encoded in 5 bits
Figure 6.18: Recreated Signal when error is encoded in 4 bits
Figure 6.19: Recreated Signal when error is encoded in 3 bits
Figure 6.20: Recreated Signal Spectrum when error is encoded in 8 bits
Figure 6.21: Recreated Signal Spectrum when error is encoded in 7 bits
Figure 6.22: Recreated Signal Spectrum when error is encoded in 6 bits
Figure 6.23: Recreated Signal Spectrum when error is encoded in 5 bits
Figure 6.24: Recreated Signal Spectrum when error is encoded in 4 bits
Figure 6.25: Recreated Signal Spectrum when error is encoded in 3 bits


    Chapter 1

    INTRODUCTION


    1. Introduction

Linear prediction modelling is used in a diverse range of applications, such as data forecasting, speech coding, video coding, speech recognition, model-based interpolation, signal restoration and impulse/step event detection. In this project we study and implement linear predictive coding (LPC) for speech compression.

    1.1 Human Speech Production

Regardless of the language spoken, all people use relatively the same anatomy to produce sound. The output produced by each human's anatomy is limited by the laws of physics.

    The process of speech production in humans can be summarized as air being pushed from

    the lungs, through the vocal tract, and out through the mouth to generate speech. In this

    type of description the lungs can be thought of as the source of the sound and the vocal

    tract can be thought of as a filter that produces the various types of sounds that make up

    speech. The above is a simplification of how sound is really produced.

    Figure 1.1: Human Speech Production System.


    Phonemes are defined as a limited set of individual sounds. There are two categories of

    phonemes, voiced and unvoiced sounds. Voiced sounds are usually vowels and often

    have high average energy levels and very distinct resonant or formant frequencies.

Voiced sounds are generated by air from the lungs being forced over the vocal cords. As a result the vocal cords vibrate in a somewhat periodic pattern that produces a series of air pulses called glottal pulses. The rate at which the vocal cords vibrate determines the pitch of the sound produced. Unvoiced sounds are usually consonants and generally have less energy and higher frequencies than voiced sounds. The production of unvoiced sound involves air being forced through the vocal tract in a turbulent flow. During this process the vocal cords do not vibrate; instead, they stay open until the sound is produced.

The amount of air that originates in the lungs also affects the production of sound in humans. The air flowing from the lungs can be thought of as the source for the vocal tract, which acts as a filter by taking in the source and producing speech. The higher the volume of air that goes through the vocal tract, the louder the sound.

Figure 1.2: Model of Speech Production. [Block diagram: an impulse train generator (controlled by the pitch period) and a random noise generator feed a voiced/unvoiced switch; the selected excitation, scaled by a gain G, drives a time-varying digital filter (controlled by the vocal tract parameters) that produces the output x(n).]


    Some of the fundamental properties of the speech signal that can be successfully

    exploited for compression of speech include the quasi-stationary nature of the speech

    signal. Quasi-stationary means that speech can be treated as a stationary signal for short

    intervals of time. This allows us to use techniques which are generally used for stationary

signals for processing speech signals. The amplitude of the speech signal varies slowly with time, which is another characteristic that is commonly exploited for compression purposes.

    1.2 Theory of Speech Coding

The recent exponential growth of telecommunications drives all aspects of technology to higher degrees than ever before, creating the need to transfer the maximum amount of information while consuming minimal resources. Because of its impact on parameters such as bandwidth requirements and conversation quality, the most important component of any telephony system is the one that generates the digital representation of the speech.

    Linear Predictive Coding (LPC) is defined as a digital method for encoding an analog

    signal in which a particular value is predicted by a linear function of the past values of

    the signal. It was first proposed as a method for encoding human speech by the United

    States Department of Defense in federal standard 1015, published in 1984.

    There exist many different types of speech compression that make use of a variety of

    different techniques. However, most methods of speech compression exploit the fact that

    speech production occurs through slow anatomical movements and that the speech

    produced has a limited frequency range. The frequency of human speech production

    ranges from around 300 Hz to 3400 Hz. Speech compression is often referred to as

    speech coding which is defined as a method for reducing the amount of information

    needed to represent a speech signal. There are many other characteristics about speech

    production that can be exploited by speech coding algorithms. One fact that is often used

is that periods of silence take up more than 50% of conversations. An easy way to save


    bandwidth and reduce the amount of information needed to represent the speech signal is

    to not transmit the silence. Another fact about speech production that can be taken

    advantage of is that mechanically there is a high correlation between adjacent samples of

    speech. Most forms of speech compression are achieved by modeling the process of

speech production as a linear digital filter. The digital filter and its slowly changing parameters are usually encoded to achieve compression of the speech signal.

    Any signal processing system that aims to achieve utmost economy in the digital

    representation of speech for storage or transmission must be based on the physical

constraints of our speech production apparatus and must exploit the limitations of human hearing. It is wasteful to reserve costly bits for signals that the human mouth (and nose) can never emit; it is equally wasteful to represent signal differences in the encoded bit stream that the human ear can never distinguish.

    Speech coding or compression is usually conducted with the use of voice coders or

vocoders. There are two types of voice coders: waveform-following coders and model-based coders. Waveform-following coders will exactly reproduce the original speech signal if no quantization errors occur. Model-based coders will never exactly reproduce the original speech signal, regardless of the presence of quantization errors, because they use a parametric model of speech production: the parameters, not the signal itself, are encoded and transmitted. LPC vocoders are model-based coders, which means that LPC coding is lossy even if no quantization errors occur.

    The general algorithm for linear predictive coding involves an analysis or encoding part

    and a synthesis or decoding part. In the encoding, LPC takes the speech signal in blocks

    or frames of speech and determines the input signal and the coefficients of the filter that

    will be capable of reproducing the current block of speech. This information is quantized

    and transmitted. In the decoding, LPC rebuilds the filter based on the coefficients

    received. The filter can be thought of as a tube which, when given an input signal,

    attempts to output speech. Additional information about the original speech signal is used


    by the decoder to determine the input or excitation signal that is sent to the filter for

    synthesis.
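The frame-by-frame analysis/synthesis loop described above can be sketched in a few lines. The Python fragment below is purely illustrative (the frame and the order-2 coefficients are made-up values, and no quantization step is shown); it is not the report's MATLAB/C implementation. The analysis filter outputs the prediction residual that the encoder would transmit, and the decoder's all-pole synthesis filter reconstructs the frame from that residual.

```python
# Illustrative frame-based LPC round trip (sketch, not the report's code).
# a[k] holds the prediction coefficient a(k+1); values are made up.

def analyze(frame, a):
    """Prediction-error (analysis) filter: e(n) = x(n) + sum_k a[k] x(n-1-k)."""
    p = len(a)
    e = []
    for n in range(len(frame)):
        pred = sum(a[k] * frame[n - 1 - k] for k in range(p) if n - 1 - k >= 0)
        e.append(frame[n] + pred)
    return e

def synthesize(e, a):
    """All-pole (synthesis) filter, the inverse: x(n) = e(n) - sum_k a[k] x(n-1-k)."""
    p = len(a)
    x = []
    for n in range(len(e)):
        pred = sum(a[k] * x[n - 1 - k] for k in range(p) if n - 1 - k >= 0)
        x.append(e[n] - pred)
    return x

frame = [0.0, 1.0, 0.8, 0.3, -0.2, -0.5, -0.4, -0.1]  # one made-up frame
a = [-0.9, 0.4]                                       # made-up order-2 coefficients
residual = analyze(frame, a)       # what the encoder would quantize and send
rebuilt = synthesize(residual, a)  # equals frame up to floating-point rounding
```

In a real coder the residual is quantized before transmission, so the reconstruction is only approximate; the effect of the number of bits spent on the residual is studied in Chapter 6.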

    1.3 Historical Perspective of Linear Predictive Coding

The history of audio and music compression begins in the 1930s with research into pulse-code modulation (PCM) and PCM coding. Compression of digital audio was started in the 1960s by telephone companies concerned with the cost of transmission bandwidth. Linear Predictive Coding's origins lie in the 1970s with the development of the first LPC algorithms. Adaptive Differential Pulse Code Modulation (ADPCM), another method of speech coding, was also first conceived in the 1970s.

    The history of speech coding makes no mention of LPC until the 1970s. However, the

    history of speech synthesis shows that the beginnings of Linear Predictive Coding

    occurred 40 years earlier in the late 1930s. The first vocoder was described by Homer

    Dudley in 1939 at Bell Laboratories. Dudley developed his vocoder, called the Parallel

Bandpass Vocoder or channel vocoder, to do speech analysis and re-synthesis. LPC is a descendant of this channel vocoder. The analysis/synthesis scheme used by Dudley is the scheme of compression that is used in many types of speech compression, including LPC.

    The idea of using LPC for speech compression came up in 1966 when Manfred R.

    Schroeder and B.S. Atal turned their attention to the following: for television pictures, the

    encoding of each picture element (pixel) as if it was completely unpredictable is of

    course rather wasteful, because adjacent pixels are correlated. Similarly, for voiced

    speech, each sample is known to be highly correlated with the corresponding sample that

occurred one pitch period earlier. In addition, each sample is correlated with the immediately preceding samples because of the resonances of the vocal tract. Therefore short durations of speech show an appreciable correlation.


    Chapter 2

    LINEAR PREDICTION


    2. Linear Prediction

    The success with which a signal can be predicted from its past samples depends on the

    autocorrelation function, or equivalently the bandwidth and the power spectrum, of the

signal. As illustrated in figure 2.1, in the time domain, a predictable signal has a smooth and correlated fluctuation, and in the frequency domain, the energy of a predictable signal is concentrated in narrow bands of frequencies. In contrast, the energy of an unpredictable signal, such as white noise, is spread over a wide band of frequencies.

    Figure 2.1: Time and Power Spectral Density Representation.

    For a signal to have a capacity to convey information it must have a degree of

randomness. Most signals, such as speech, music and video, are partially predictable and partially random. These signals can be modeled as the output of a filter

    excited by an uncorrelated input. The random input models the unpredictable part of the

    signal, whereas the filter models the predictable structure of the signal. The aim of linear

    prediction is to model the mechanism that introduces the correlation in a signal.
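The contrast between a correlated signal and white noise shows up with even the crudest predictor. The sketch below (illustrative Python; a naive order-1 predictor x^(n) = x(n-1) rather than an optimized one) compares the mean-squared one-step prediction error of a narrowband sinusoid with that of white noise:

```python
# Sketch: a crude order-1 predictor x^(n) = x(n-1) applied to a correlated
# (narrowband) signal and to white noise. Values are illustrative.
import math, random

random.seed(0)
N = 1000
smooth = [math.sin(0.05 * n) for n in range(N)]     # correlated, narrowband
noise = [random.gauss(0.0, 1.0) for _ in range(N)]  # white, unpredictable

def naive_mse(x):
    """Mean squared one-step prediction error of the predictor x^(n) = x(n-1)."""
    errs = [(x[n] - x[n - 1]) ** 2 for n in range(1, len(x))]
    return sum(errs) / len(errs)
```

For the slowly varying sinusoid the one-step difference is tiny, while for white noise consecutive samples are uncorrelated, so the error energy is on the order of the signal variance itself; this is exactly the sense in which a narrowband signal is predictable and white noise is not.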


    2.1 Innovations representation of a Random Process

    A wide sense stationary random process may be represented as the output of a causal and

    causally invertible linear system excited by a white noise process. The condition that the

system is causally invertible also allows us to represent the wide-sense stationary random process by the output of the inverse system, which is a white noise process. This statement is explained below.

Let us consider a wide-sense stationary process x(n) with the autocorrelation sequence γxx(m) and power spectral density Γxx(f), |f| ≤ 1/2.


where, with v(m) denoting the coefficients of the Fourier-series expansion of ln Γxx(f), by definition σw² = exp[v(0)] and

    H(z) = exp[ Σ_{m=1}^{∞} v(m) z^{-m} ],   |z| > r1

On evaluating the above equation on the unit circle, we have the equivalent representation of the power spectral density as

    Γxx(f) = σw² |H(f)|²    (2.4)

The filter with the system function H(z) is analytic in the region |z| > r1.


    2.2 Rational Power Spectra

Consider the case in which the power spectral density of the stationary random process x(n) is a rational function, expressed as

    Γxx(z) = σw² B(z) B(z⁻¹) / [A(z) A(z⁻¹)],   r1 < |z| < r2    (2.7)

    where bk and ak are the filter coefficients that determine the location of the zeros and

    poles of H(z), respectively. Thus H(z) is causal, stable, and minimum phase. Its

    reciprocal 1/H(z) is also a causal, stable, and minimum phase linear system. Therefore the

    random process x(n) uniquely represents the statistical properties of the innovation

    process w(n), and vice versa.

For the linear system with the rational system function H(z) given by the above equation, the output x(n) is related to the input w(n) by the following difference equation

    x(n) + Σ_{k=1}^{p} a_k x(n-k) = Σ_{k=0}^{q} b_k w(n-k)    (2.8)


We distinguish among three special cases:

Autoregressive (AR) process: b0 = 1, bk = 0, k > 0.
In this case the linear filter H(z) = 1/A(z) is an all-pole filter and the difference equation for the input-output relationship is

    x(n) + Σ_{k=1}^{p} a_k x(n-k) = w(n)    (2.9)

In turn, the noise-whitening filter for generating the innovations process is an all-zero filter.

Moving Average (MA) process: ak = 0, k ≥ 1.
In this case the linear filter H(z) = B(z) is an all-zero filter and the difference equation for the input-output relationship is

    x(n) = Σ_{k=0}^{q} b_k w(n-k)    (2.10)

The noise-whitening filter for generating the innovations process is an all-pole filter.

Autoregressive, Moving Average (ARMA) process: In this case the linear filter H(z) = B(z)/A(z) has both poles and zeros in the z-plane and the corresponding difference equation is given by 2.8. The inverse system for generating the innovations process w(n) from x(n) is also a pole-zero system of the form 1/H(z) = A(z)/B(z).
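The AR and MA cases can be simulated directly from their difference equations, eqs. 2.9 and 2.10. The following Python sketch (illustrative parameter values, pure stdlib; not part of the report's MATLAB/C code) generates both kinds of process by filtering white noise:

```python
# Sketch: generating AR and MA processes from white noise via their
# difference equations (2.9) and (2.10). Parameter values are illustrative.
import random

random.seed(1)

def generate_ar(a, sigma, n):
    """AR(p): x(n) = -sum_k a[k] x(n-1-k) + w(n), with w(n) white, std sigma."""
    x = []
    for i in range(n):
        past = sum(a[k] * x[i - 1 - k] for k in range(len(a)) if i - 1 - k >= 0)
        x.append(random.gauss(0.0, sigma) - past)
    return x

def generate_ma(b, sigma, n):
    """MA(q): x(n) = sum_k b[k] w(n-k)."""
    w = [random.gauss(0.0, sigma) for _ in range(n)]
    return [sum(b[k] * w[i - k] for k in range(len(b)) if i - k >= 0)
            for i in range(n)]
```

With a = [-0.8] the AR recursion is x(n) = 0.8 x(n-1) + w(n), so adjacent samples are strongly positively correlated, which is exactly what makes the process predictable.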


2.3 Relationships between the Filter Parameters and the Autocorrelation Sequence

When the power spectral density of the stationary random process is a rational function, there is a basic relationship between the autocorrelation sequence γxx(m) and the parameters ak and bk of the linear filter H(z) that generates the process by filtering the white noise sequence w(n). This relationship may be obtained by multiplying the difference equation in 2.8 by x*(n-m) and taking the expected value of both sides of the resulting equation, to get

    γxx(m) = -Σ_{k=1}^{p} a_k γxx(m-k) + Σ_{k=0}^{q} b_k γwx(m-k)    (2.11)

where γwx(m) is the cross-correlation sequence between w(n) and x(n).

The cross-correlation sequence γwx(m) is related to the filter impulse response, as shown below:

    γwx(m) = E[ x*(n) w(n+m) ]
           = E[ Σ_{k=0}^{∞} h(k) w*(n-k) w(n+m) ]    (2.12)
           = σw² h(-m)

where, in the last step, it was assumed that the sequence w(n) is white. Since h(n) is causal, γwx(m) = 0 for m > 0.


    γxx(m) = -Σ_{k=1}^{p} a_k γxx(m-k),                                   m > q
    γxx(m) = -Σ_{k=1}^{p} a_k γxx(m-k) + σw² Σ_{k=0}^{q-m} h(k) b_{k+m},  0 ≤ m ≤ q
    γxx(m) = γ*xx(-m),                                                    m < 0
    (2.13)

This represents a nonlinear relationship between γxx(m) and the parameters ak and bk. The relationship in 2.13 applies, in general, to the ARMA process. For an AR process, 2.13 simplifies to

    γxx(m) = -Σ_{k=1}^{p} a_k γxx(m-k),        m > 0
    γxx(m) = -Σ_{k=1}^{p} a_k γxx(m-k) + σw²,  m = 0
    γxx(m) = γ*xx(-m),                         m < 0
    (2.14)

Thus a linear relationship is obtained between γxx(m) and the parameters ak. These equations are called the Yule-Walker equations and may be expressed in the matrix form

    ⎡ γxx(0)   γxx(-1)   ...  γxx(-p)   ⎤ ⎡ 1  ⎤   ⎡ σw² ⎤
    ⎢ γxx(1)   γxx(0)    ...  γxx(-p+1) ⎥ ⎢ a1 ⎥ = ⎢  0  ⎥
    ⎢   ...      ...     ...    ...     ⎥ ⎢ .. ⎥   ⎢ ... ⎥
    ⎣ γxx(p)   γxx(p-1)  ...  γxx(0)    ⎦ ⎣ ap ⎦   ⎣  0  ⎦
    (2.15)

where γxx(-k) = γ*xx(k). This correlation matrix is Toeplitz and hence it can be efficiently inverted by the use of the Levinson-Durbin algorithm, as shown later.
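Because the matrix in 2.15 is Toeplitz, the AR coefficients can be recovered from the autocorrelation values with the Levinson-Durbin recursion treated in Chapter 4. The following is a compact Python sketch of that recursion (using the sign convention of eq. 2.14; the report's own implementations are the MATLAB and C codes in the appendices):

```python
# Sketch of the Levinson-Durbin recursion for the Yule-Walker equations (2.15),
# with the sign convention x^(n) = -sum_k a(k) x(n-k).

def levinson_durbin(r, p):
    """Given autocorrelations r[0..p], return (a, E): coefficients a(1)..a(p)
    and the final prediction error power E = sigma_w^2."""
    a = []
    E = r[0]
    for m in range(1, p + 1):
        acc = r[m] + sum(a[j] * r[m - 1 - j] for j in range(m - 1))
        k_m = -acc / E                        # reflection coefficient K_m
        a = [a[j] + k_m * a[m - 2 - j] for j in range(m - 1)] + [k_m]
        E *= (1.0 - k_m * k_m)                # O(p) work per order, O(p^2) total
    return a, E
```

For the test values r = [1, 0.8, 0.64], which are the exact autocorrelations of an AR(1) process with a1 = -0.8, the recursion returns a = [-0.8, 0] and error power 0.36 = r(0)(1 - 0.8²).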


2.4 Theory of Linear Prediction

Linear prediction involves predicting future values of a stationary random process from the observation of past values of the process. Consider, in particular, a one-step forward linear predictor, which forms the prediction of the value x(n) by a weighted linear combination of the past values x(n-1), x(n-2), ..., x(n-p).

Hence the linearly predicted value of x(n) is

    x̂(n) = -Σ_{k=1}^{p} ap(k) x(n-k)    (2.16)

where the ap(k) represent the weights in the linear combination. These weights are called the prediction coefficients of the one-step forward linear predictor of order p. The negative sign in the definition of x̂(n) is for mathematical convenience.

The difference between the value x(n) and the predicted value x̂(n) is called the forward prediction error, denoted by fp(n):

    fp(n) = x(n) - x̂(n) = x(n) + Σ_{k=1}^{p} ap(k) x(n-k)    (2.17)

For information-bearing signals, the prediction error fp(n) may be regarded as the information, or innovation, content of the sample.

To calculate the optimum prediction coefficients for our prediction filter we minimize the mean square error, i.e. we require that E[ (x(n) - x̂(n))² ] is minimum.


Two approaches can obtain the LPC coefficients ak characterizing an all-pole H(z) model. The least-squares method selects ak to minimize the mean energy in the error e(n) over a frame of signal data, while the lattice filter approach permits instantaneous updating of the coefficients.

The first of the two common least-squares techniques is the autocorrelation method, which multiplies the speech signal by a window w(n) so that the windowed signal w(n)x(n) has a finite duration.

The autocorrelation sequence rss describes the redundancy in the signal x(n):

    rss(k) = (1/N) Σ_{n=0}^{N-1} x(n) x(n-k)    (2.18)

where x(n), n = -P, (-P)+1, ..., N-1 are the known samples and N is a normalizing factor.
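The sample autocorrelation of eq. 2.18 is straightforward to compute; a short Python sketch (illustrative, treating samples outside the windowed segment as zero):

```python
# Sketch: sample autocorrelation of a windowed segment, per eq. 2.18.
# Samples outside the segment are treated as zero.

def autocorr(x, p):
    """r(k) = (1/N) sum_n x(n) x(n-k) for k = 0..p, with N = len(x)."""
    N = len(x)
    return [sum(x[n] * x[n - k] for n in range(k, N)) / N for k in range(p + 1)]
```

The values r(0)..r(p) produced here are exactly the inputs needed by the Levinson-Durbin recursion of Chapter 4.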

    2.4.1 The Autocorrelation Method

In this method, the speech segment is assumed to be zero outside the interval 0 ≤ m ≤ N-1. Thus the speech sample can be expressed as

    xn(m) = x(n+m) · w(m),   0 ≤ m ≤ N-1
    xn(m) = 0,               otherwise
    (2.19)

Another least-squares technique, called the covariance method, windows the error signal instead of the actual speech signal.


Autocovariance measures the redundancy in a signal (2.20).

    2.4.2 The Covariance Method

An alternative to using a weighting function or window for defining xn(m) is to fix the interval over which the mean-squared error is computed to the range 0 ≤ m ≤ N-1 and use the unweighted speech directly. That is,

    En = Σ_{m=0}^{N-1} en²(m)

where φn(i,k) is defined as

    φn(i,k) = Σ_{m=0}^{N-1} xn(m-i) xn(m-k),   1 ≤ i ≤ p, 0 ≤ k ≤ p    (2.21)

or, by a change of variable,

    φn(i,k) = Σ_{m} xn(m) xn(m+i-k),   1 ≤ i ≤ p, 0 ≤ k ≤ p    (2.22)

Using the extended speech interval to define the covariance values φ(i,k), the matrix form of the LPC analysis equation becomes


    ⎡ φ(1,1)  φ(1,2)  ...  φ(1,p) ⎤ ⎡ a1 ⎤   ⎡ φ(1,0) ⎤
    ⎢ φ(2,1)  φ(2,2)  ...  φ(2,p) ⎥ ⎢ a2 ⎥ = ⎢ φ(2,0) ⎥
    ⎢  ...      ...   ...    ...  ⎥ ⎢ .. ⎥   ⎢  ...   ⎥
    ⎣ φ(p,1)  φ(p,2)  ...  φ(p,p) ⎦ ⎣ ap ⎦   ⎣ φ(p,0) ⎦
    (2.23)

The resulting covariance matrix is symmetric (since φn(i,k) = φn(k,i)) but not Toeplitz, and can be solved efficiently by a set of techniques called the Cholesky decomposition.
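That solution path can be sketched as follows. This is an illustrative textbook Cholesky solver in Python for a small symmetric positive-definite system of the form of eq. 2.23, not the report's implementation:

```python
# Sketch: solving the covariance-method normal equations (2.23) by Cholesky
# factorization phi = L L^T, then two triangular solves. Illustrative only.
import math

def cholesky_solve(phi, c):
    """Solve phi * a = c, with phi symmetric positive definite."""
    n = len(c)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):                       # factor phi = L L^T
        for j in range(i + 1):
            s = phi[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    y = [0.0] * n                            # forward substitution: L y = c
    for i in range(n):
        y[i] = (c[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    a = [0.0] * n                            # back substitution: L^T a = y
    for i in reversed(range(n)):
        a[i] = (y[i] - sum(L[k][i] * a[k] for k in range(i + 1, n))) / L[i][i]
    return a
```

The factorization costs O(p³) in general, which is why the Toeplitz structure exploited by Levinson-Durbin in the autocorrelation method is such an advantage.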

The mean-square value of the forward linear prediction error fp(n) based on the autocorrelation method is

    Ep^f = E[ |fp(n)|² ]
         = γxx(0) + 2 Re[ Σ_{k=1}^{p} ap*(k) γxx(k) ] + Σ_{k=1}^{p} Σ_{l=1}^{p} ap*(l) ap(k) γxx(l-k)
    (2.24)

Now Ep^f is a quadratic function of the predictor coefficients, and its minimization leads to the set of linear equations

    γxx(l) = -Σ_{k=1}^{p} ap(k) γxx(l-k),   l = 1, 2, ..., p    (2.25)

These are called the normal equations for the coefficients of the linear predictor. The minimum mean-square prediction error is thus

    min[Ep^f] = Ep^f = γxx(0) + Σ_{k=1}^{p} ap(k) γxx(-k)    (2.26)

Writing eq. 2.25 in matrix-vector form, with Rxx the p × p autocorrelation matrix with elements γxx(l-k) and rxx = [γxx(1), ..., γxx(p)]^T,

    Rxx ap = -rxx    (2.27)


from which the predictor coefficients can be obtained as

    ap = -Rxx⁻¹ rxx    (2.28)

A question may arise as to whether to use the autocorrelation method or the covariance method in estimating the predictor parameters. The covariance method is quite general and can be used with no restrictions. The only problem is that of stability of the resulting filter. In the autocorrelation method, on the other hand, the filter is guaranteed to be stable, but problems of parameter accuracy can arise because of the necessity of windowing (truncating) the time signal. This is usually a problem if the signal is a portion of an impulse response. For example, if the impulse response of an all-pole filter is analyzed by the covariance method, the filter parameters can be computed accurately from only a finite number of samples of the signal. Using the autocorrelation method, one cannot obtain the exact parameter values unless the whole infinite impulse response is used in the analysis. However, in practice, very good approximations can be obtained by truncating the impulse response at a point where most of the decay of the response has already occurred.


    Chapter 3

    LATTICE FILTERS


    3. Lattice Filters

    Linear prediction can be viewed as being equivalent to linear filtering where the predictor

    is embedded in the linear filter, as shown in figure 3.1.

    Figure 3.1: Forward Linear Prediction.

This is called a prediction-error filter, with the input sequence x(n) and the output sequence fp(n). An equivalent realization for the prediction-error filter is shown in figure 3.2.

    Figure 3.2: Prediction Error Filter.

[Figure 3.1 shows x(n) passing through a unit delay z⁻¹ into a forward linear predictor whose output x̂(n) is subtracted from x(n) to give fp(n); figure 3.2 shows the equivalent direct-form FIR (tapped-delay-line) realization with tap coefficients 1, ap(1), ap(2), ..., ap(p).]


This realization is a direct-form FIR filter with the system function given as

A_p(z) = \sum_{k=0}^{p} a_p(k) z^{-k}    (3.1)

where, by definition, a_p(0) = 1.

Prediction-error filters can also be realised in another way, which takes the form of a lattice structure. To find a relationship between the lattice filter coefficients and the FIR filter structure, let us begin with a predictor of order p = 1. The output of such a filter is

f_1(n) = x(n) + a_1(1) x(n-1)    (3.2)

    This output can be obtained from the single stage lattice filter, as shown in figure 3.3

below, by exciting both the inputs by x(n) and selecting the output from the top branch.

    Figure 3.3: Single stage Lattice Filter.

Thus the output is exactly that given by the above equation if we select K_1 = a_1(1). The parameter K_1 in the lattice filter is called a reflection coefficient.

The negated reflection coefficient, -K_m, is also called the partial correlation (PARCOR) coefficient.



Next, consider a predictor of order p = 2. For this case, the output of the direct-form FIR filter is

f_2(n) = x(n) + a_2(1) x(n-1) + a_2(2) x(n-2)    (3.3)

By cascading two lattice stages as shown in figure 3.4, it is possible to obtain the same output as above.

    Figure 3.4: Two stage Lattice Filter.

The two outputs from the first stage are

f_1(n) = x(n) + K_1 x(n-1)

g_1(n) = K_1^* x(n) + x(n-1)    (3.4)

Similarly, the two outputs from the second stage are

f_2(n) = f_1(n) + K_2 g_1(n-1)

g_2(n) = K_2^* f_1(n) + g_1(n-1)    (3.5)

Substituting the values of f_1(n) and g_1(n-1) in the above equations yields

f_2(n) = x(n) + (K_1 + K_1^* K_2) x(n-1) + K_2 x(n-2)    (3.6)



On equating coefficients we get

a_2(2) = K_2   and   a_2(1) = K_1 + K_1^* K_2    (3.7)

or equivalently,

K_2 = a_2(2),   K_1 = a_1(1)    (3.8)

By continuing this process, the equivalence between an mth-order direct-form FIR filter and an mth-stage lattice filter can be demonstrated. The lattice is described by the following set of order-recursive equations:

f_0(n) = g_0(n) = x(n)

f_m(n) = f_{m-1}(n) + K_m g_{m-1}(n-1),   m = 1, 2, ..., p

g_m(n) = K_m^* f_{m-1}(n) + g_{m-1}(n-1),   m = 1, 2, ..., p    (3.9)

A p-stage lattice filter for a pth-order predictor can be shown as follows:

    Figure 3.5: P stage Lattice Filter.

    As a consequence of the equivalence between the direct form prediction error filter and

    the FIR lattice filter, the output of the p-stage lattice filter is expressed as :

f_p(n) = \sum_{k=0}^{p} a_p(k) x(n-k),   a_p(0) = 1    (3.10)
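The equivalence between the two realizations can be checked numerically. The sketch below is an illustrative Python fragment for the real-valued case (it is not part of the project code): it runs the lattice recursion of 3.9 and compares its output against the direct-form prediction-error filter of 3.10, using the order-2 conversion a_2(1) = K_1 + K_1 K_2, a_2(2) = K_2 from 3.7.

```python
import numpy as np

def lattice_error(x, K):
    """Lattice recursion of eq. 3.9 for real reflection coefficients:
    f_m(n) = f_{m-1}(n) + K_m g_{m-1}(n-1)
    g_m(n) = K_m f_{m-1}(n) + g_{m-1}(n-1),   with f_0 = g_0 = x."""
    f = np.asarray(x, dtype=float).copy()
    g = f.copy()
    for Km in K:
        g_del = np.concatenate(([0.0], g[:-1]))  # g_{m-1}(n-1), zero initial state
        f, g = f + Km * g_del, Km * f + g_del
    return f

# Two-stage example; direct-form taps follow from eq. 3.7 (real case):
# a_2(1) = K1 + K1*K2, a_2(2) = K2
K1, K2 = 0.5, -0.3
a = np.array([1.0, K1 + K1 * K2, K2])

x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
f_lat = lattice_error(x, [K1, K2])
f_fir = np.convolve(x, a)[:len(x)]   # f_p(n) = sum_k a_p(k) x(n-k), eq. 3.10
print(np.allclose(f_lat, f_fir))     # True: the two realizations coincide
```

The same check extends to any order: appending another reflection coefficient to the list adds one more lattice stage without disturbing the earlier ones.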



The lattice form characterization requires only the p reflection coefficients K_i for a p-stage linear predictor, in comparison with the p(p+1)/2 filter coefficients required by the FIR filter implementation. The reason that the lattice provides a more compact representation is that appending stages to the lattice does not alter the parameters of the previous stages. On the other hand, appending the pth stage to an FIR-based predictor results in a system function A_p(z) that has coefficients totally different from the coefficients of the lower-order FIR filter with system function A_{p-1}(z).

Although the direct-form implementation of the linear predictor is the most convenient method, for many applications, such as transmission of the predictor coefficients in speech coding, it is advantageous to use the lattice form of the predictor. This is because the lattice form can be conveniently checked for stability. That is, for a stable model, the magnitude of each reflection coefficient is bounded by unity, and therefore it is relatively easy to check a lattice structure for stability.

    The quantization of the filter coefficients for transmission can create a major problem

    since errors in the filter coefficients can lead to instability in the vocal tract filter and

    create an inaccurate output signal. This potential problem is averted by quantizing and

    transmitting the reflection coefficients that are generated by the Levinson-Durbin

algorithm. These coefficients can be used to rebuild the set of filter coefficients {a_i} and

    can guarantee a stable filter if their magnitude is strictly less than one.

    A major attraction of a lattice structure is its modular form and the relative ease with

which the model order can be extended. Furthermore, a perturbation of the parameters of any section of the lattice structure has a limited and more localized effect.
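The stability check described above can be illustrated with a short sketch. The Python fragment below is our own illustration for the real-valued case (not the project's code): it uses the step-down (backward Levinson) recursion to recover the reflection coefficients from a set of direct-form coefficients {a_i}, and declares the model stable only if every |K_m| < 1.

```python
import numpy as np

def reflection_coeffs(a):
    """Step-down (backward Levinson) recursion: recover K_1..K_p from
    real direct-form coefficients a = [1, a_p(1), ..., a_p(p)]."""
    cur = np.asarray(a, dtype=float)[1:].copy()   # a_p(1..p)
    K = []
    for m in range(len(cur), 0, -1):
        Km = cur[m - 1]                      # K_m = a_m(m)
        K.append(Km)
        if m == 1 or abs(Km) >= 1.0:         # done, or unstable: stop stepping down
            break
        # a_{m-1}(k) = (a_m(k) - K_m a_m(m-k)) / (1 - K_m^2)
        cur = (cur[:m - 1] - Km * cur[m - 2::-1]) / (1.0 - Km ** 2)
    return K[::-1]

def is_stable(a):
    """Stable model iff every reflection coefficient has magnitude < 1."""
    return all(abs(Km) < 1.0 for Km in reflection_coeffs(a))

# a_2(1) = K1 + K1*K2 = 0.35, a_2(2) = K2 = -0.3 (built from K1 = 0.5, K2 = -0.3)
print(is_stable([1.0, 0.35, -0.3]))   # True: recovers K = (0.5, -0.3)
print(is_stable([1.0, 0.0, 1.5]))     # False: |K_2| = 1.5 >= 1
```

This is exactly the localized-perturbation point: quantizing one K_m slightly cannot push the model unstable as long as its magnitude stays below one.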


    3.1 Predictor Model Order Selection

    One procedure for the determination of the correct model order is to increment the model

    order, and monitor the differential change in the error power, until the change levels off.

The incremental change in error power in increasing the model order from i-1 to i is defined as

\Delta E^{(i)} = E^{(i-1)} - E^{(i)}    (3.11)

The order p beyond which the decrease in the error power becomes less than a threshold is taken as the model order.

When the model order is less than the correct order, the signal is under-modelled. In this case, the prediction error is not well decorrelated and will be more than the optimal minimum. A further consequence of the under-modelling is a decrease in the spectral

    resolution of the model: adjacent spectral peaks of the signal could be merged and appear

    as a single spectral peak when the model order is too small. When the model order is

larger than the correct order, the signal is over-modelled. Over-modelling can

    result in an ill-conditioned matrix equation, unreliable numerical solutions and the

    appearance of spurious spectral peaks in the model.
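The procedure can be tried on a synthetic signal of known order. The Python sketch below is our own illustration (the AR(2) test signal and all names are assumptions, not from the report): it fits predictors of increasing order by solving the normal equations directly and watches the error power level off once the true order is reached.

```python
import numpy as np

def error_power(x, p):
    """Error power E^(p) of an order-p predictor fitted with the
    autocorrelation method (normal equations solved directly)."""
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = r[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]  # Toeplitz matrix
    a = np.linalg.solve(R, -r[1:])            # a_p(1..p), with a_p(0) = 1
    return r[0] + np.dot(a, r[1:])            # E_p = r(0) + sum_k a_p(k) r(k)

# Synthetic AR(2) signal: the "correct" model order is 2
rng = np.random.default_rng(0)
w = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + w[n]

E = [error_power(x, p) for p in range(1, 7)]
# Delta E^(i) = E^(i-1) - E^(i): large up to order 2, near zero beyond it
print([round(E[i - 1] - E[i], 4) for i in range(1, 6)])
```

Under-modelling (p = 1 here) leaves a clearly higher error power; beyond the true order the incremental drop is negligible, which is the levelling-off criterion described above.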


    Chapter 4

    Levinson Durbin Algorithm


    4. The Levinson-Durbin Algorithm

The Levinson-Durbin algorithm is a computationally efficient algorithm for solving the normal equations. It is named so in recognition of its first use by Levinson (1947) and its independent reformulation at a later date by Durbin (1960). The normal equations are

\sum_{k=0}^{p} a_p(k) \gamma_{xx}(l-k) = 0,   l = 1, 2, \ldots, p,   a_p(0) = 1    (4.1)

for the prediction coefficients. This algorithm exploits the special symmetry in the autocorrelation matrix

\Gamma_p = \begin{bmatrix} \gamma_{xx}(0) & \gamma_{xx}(1) & \cdots & \gamma_{xx}(p-1) \\ \gamma_{xx}^*(1) & \gamma_{xx}(0) & \cdots & \gamma_{xx}(p-2) \\ \vdots & \vdots & & \vdots \\ \gamma_{xx}^*(p-1) & \gamma_{xx}^*(p-2) & \cdots & \gamma_{xx}(0) \end{bmatrix}    (4.2)

Since the (i, j) element of \Gamma_p depends only on the difference i - j, the autocorrelation matrix is a Toeplitz matrix. Also, since \Gamma_p(i, j) = \Gamma_p^*(j, i), the matrix is also Hermitian.

The key to the Levinson-Durbin method of solution that exploits the Toeplitz property of the matrix is to proceed recursively, beginning with a predictor of order m = 1 (one

    coefficient) and to increase the order recursively, using the lower order solutions to

    obtain the solution to the next higher order. Thus the solution to the first order predictor

obtained by solving the equation is

a_1(1) = -\gamma_{xx}(1) / \gamma_{xx}(0)    (4.3)

    and the resulting minimum mean square error (MMSE) is


E_1^f = \gamma_{xx}(0) [1 - |a_1(1)|^2]    (4.4)

    The next step is to solve for the coefficients a2(1) and a2(2) of the second order predictor

and express the solution in terms of a_1(1). The resulting equations are

\gamma_{xx}(0) a_2(1) + \gamma_{xx}^*(1) a_2(2) = -\gamma_{xx}(1)

\gamma_{xx}(1) a_2(1) + \gamma_{xx}(0) a_2(2) = -\gamma_{xx}(2)    (4.5)

By using the solutions in 4.3 and 4.4 to eliminate \gamma_{xx}(1), the following equations are obtained:

a_2(2) = - \frac{\gamma_{xx}(2) + a_1(1)\gamma_{xx}(1)}{\gamma_{xx}(0)[1 - |a_1(1)|^2]} = - \frac{\gamma_{xx}(2) + a_1(1)\gamma_{xx}(1)}{E_1^f}    (4.6)

a_2(1) = a_1(1) + a_2(2) a_1^*(1)    (4.7)

In this manner, we express the coefficients of the mth-order predictor recursively in terms of the coefficients of the (m-1)st-order predictor. We can write the coefficient vector a_m as the sum of two vectors, namely

a_m = \begin{bmatrix} a_m(1) \\ a_m(2) \\ \vdots \\ a_m(m) \end{bmatrix} = \begin{bmatrix} a_{m-1} \\ 0 \end{bmatrix} + \begin{bmatrix} d_{m-1} \\ K_m \end{bmatrix}    (4.8)

where a_{m-1} is the predictor coefficient vector of the (m-1)st-order predictor, and the vector d_{m-1} and the scalar K_m are to be determined. For this purpose, the m \times m autocorrelation matrix \Gamma_m is partitioned as


\Gamma_m = \begin{bmatrix} \Gamma_{m-1} & \gamma_{m-1}^{b*} \\ \gamma_{m-1}^{bt} & \gamma_{xx}(0) \end{bmatrix}    (4.9)

where \gamma_{m-1}^{bt} = [\gamma_{xx}(m-1) \ \gamma_{xx}(m-2) \ \cdots \ \gamma_{xx}(1)] = (\gamma_{m-1}^{b})^t. The superscript b on \gamma_{m-1} denotes the vector \gamma_{m-1}^{b} = [\gamma_{xx}(m-1) \ \gamma_{xx}(m-2) \ \cdots \ \gamma_{xx}(1)]^t, whose elements are those of \gamma_{m-1} taken in reverse order.

The solution to the equation \Gamma_m a_m = -\gamma_m may be expressed as

\begin{bmatrix} \Gamma_{m-1} & \gamma_{m-1}^{b*} \\ \gamma_{m-1}^{bt} & \gamma_{xx}(0) \end{bmatrix} \left( \begin{bmatrix} a_{m-1} \\ 0 \end{bmatrix} + \begin{bmatrix} d_{m-1} \\ K_m \end{bmatrix} \right) = - \begin{bmatrix} \gamma_{m-1} \\ \gamma_{xx}(m) \end{bmatrix}    (4.10)

This is the key step in the Levinson-Durbin algorithm. From 4.10, two equations are obtained as follows:

\Gamma_{m-1} a_{m-1} + \Gamma_{m-1} d_{m-1} + K_m \gamma_{m-1}^{b*} = -\gamma_{m-1}    (4.11)

\gamma_{m-1}^{bt} a_{m-1} + \gamma_{m-1}^{bt} d_{m-1} + K_m \gamma_{xx}(0) = -\gamma_{xx}(m)    (4.12)

Since \Gamma_{m-1} a_{m-1} = -\gamma_{m-1}, 4.11 yields the solution

d_{m-1} = K_m a_{m-1}^{b*} = K_m \begin{bmatrix} a_{m-1}^*(m-1) \\ a_{m-1}^*(m-2) \\ \vdots \\ a_{m-1}^*(1) \end{bmatrix}    (4.13)

The scalar equation 4.12 can be used to solve for K_m:


K_m = - \frac{\gamma_{xx}(m) + \gamma_{m-1}^{bt} a_{m-1}}{\gamma_{xx}(0) + \gamma_{m-1}^{bt} a_{m-1}^{b*}}    (4.14)

Thus, substituting the solutions in 4.13 and 4.14 into 4.8, we get the recursive equations for the predictor coefficients and the reflection coefficients of the lattice filter:

K_m = a_m(m) = - \frac{\gamma_{xx}(m) + \gamma_{m-1}^{bt} a_{m-1}}{E_{m-1}^f}

a_m(k) = a_{m-1}(k) + K_m a_{m-1}^*(m-k),   k = 1, 2, \ldots, m-1

E_m^f = (1 - |K_m|^2) E_{m-1}^f    (4.15)

From the equations we note that the predictor coefficients form a recursive set of equations. K_m is the reflection coefficient of the mth stage. Also K_m = a_m(m), the mth coefficient of the mth-stage predictor.

The important virtue of the Levinson-Durbin algorithm is its computational efficiency: its use results in a big saving in the number of operations. The Levinson-Durbin recursion requires O(m) multiplications and additions (operations) to go from stage m to stage m+1. Therefore, for p stages it takes on the order of 1 + 2 + 3 + \cdots + p = p(p+1)/2, i.e. O(p^2), operations to solve for the prediction filter coefficients or the reflection coefficients, compared with O(p^3) operations if the Toeplitz property is not exploited.
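The recursion above translates almost line-for-line into code. The following Python sketch is illustrative and limited to the real-valued case (the project's own implementation was written in MATLAB); it returns both the prediction coefficients and the reflection coefficients, so it can drive either the direct-form or the lattice realization.

```python
import numpy as np

def levinson_durbin(r, p):
    """Order-recursive solution of the normal equations for real data.
    r : autocorrelation values gamma_xx(0..p)
    Returns a = [1, a_p(1), ..., a_p(p)], reflection coefficients K_1..K_p,
    and the final forward prediction error power E_p^f."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    K = np.zeros(p)
    E = r[0]                                   # E_0^f = gamma_xx(0)
    for m in range(1, p + 1):
        # K_m = -(r(m) + sum_{k=1}^{m-1} a_{m-1}(k) r(m-k)) / E_{m-1}^f
        Km = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / E
        K[m - 1] = Km
        a[1:m] = a[1:m] + Km * a[m - 1:0:-1]   # a_m(k) = a_{m-1}(k) + K_m a_{m-1}(m-k)
        a[m] = Km                              # a_m(m) = K_m
        E *= 1.0 - Km ** 2                     # E_m^f = (1 - K_m^2) E_{m-1}^f
    return a, K, E

# Cross-check against direct inversion of the 2x2 Toeplitz system
r = np.array([1.0, 0.75, 0.5])
a, K, E = levinson_durbin(r, 2)
R = np.array([[r[0], r[1]], [r[1], r[0]]])
print(np.allclose(a[1:], np.linalg.solve(R, -r[1:])))   # True
```

The O(p^2) total cost is visible in the structure: the loop body at stage m performs only O(m) work.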


    Chapter 5

    Progress Report


    5. Progress Report.

As required by the normal equations, we have to calculate the autocorrelation matrix and solve the equation a_p = R_{xx}^{-1} r_{xx}. Our first program calculated the autocorrelation matrix and compared the value of the prediction coefficients obtained using the matrix inversion function of MATLAB 6.1 with the linear prediction function lpc() of MATLAB 6.1. The results are shown below.

Using inv() function    Using lpc() function
 1.0000                  1.0000
-1.6538                 -1.6538
 1.3089                  1.3089
-1.0449                 -1.0449
 0.4729                  0.4729
-0.0588                 -0.0588
-0.1057                 -0.1057
 0.6557                  0.6557
-0.8568                 -0.8568
 0.5507                  0.5507
-0.2533                 -0.2533
-0.0262                 -0.0262
 0.1345                  0.1345

Table 5.1

Since both functions returned identical values, our autocorrelation matrix was correctly calculated. Next we implemented the Levinson-Durbin algorithm using the recursive relations explained previously. The function we wrote returned both the prediction coefficients and the reflection coefficients. Hence this function can be used to implement the linear predictor using either the FIR filter or the lattice filter.
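For readers working outside MATLAB, the inversion-based computation can be sketched in Python (an illustrative translation; the variable names and the test signal are ours, not from the project). The leading 1 and the sign convention below match the [1, a(1..p)] vector that lpc() returns.

```python
import numpy as np

def autocorr_normal_eqs(x, p):
    """Build the autocorrelation (Toeplitz) matrix R_xx and the vector r_xx
    from a signal, as required by the normal equations."""
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(p + 1)])
    R = r[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]
    return R, r[1:]

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(1000))      # strongly correlated test signal

R, rv = autocorr_normal_eqs(x, 4)
a_inv = np.concatenate(([1.0], np.linalg.inv(R) @ -rv))   # explicit inversion
a_slv = np.concatenate(([1.0], np.linalg.solve(R, -rv)))  # direct solver
print(np.allclose(a_inv, a_slv))              # both routes agree
```

Both routes cost O(p^3), which is exactly the expense that the Levinson-Durbin recursion avoids.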

    To check the accuracy of our function, we made a program that calculated the prediction

coefficients for a speech input signal and implemented an FIR predictive filter to generate


the error sequence. The same coefficients were then used to implement the inverse filter that generated back the speech signal from the error signal. The figure below shows the original, error and recreated signals. It is clear that the recreated signal is the same as the original input signal.

    Figure 5.1: Original, Error and Recreated signal (without segmentation)

The above sample is for the utterance of the word N-S-I-T. The speech sample is encoded in 8 bits and sampled at a frequency of 11025 Hz, corresponding to telephone quality. In all the examples in this section, 8th-order linear prediction is used. The error signal is clearly smaller than the original signal and will require fewer bits to encode. This program does not use advanced techniques such as segmentation, windowing or end detection. But since speech is only quasi-stationary and cannot be assumed stationary for such a large number of samples, we will have to perform segmentation and windowing of this speech


sample. This will further reduce the magnitude of the error signal, helping achieve improved compression of the speech.

    Next we divided the speech into non-overlapping segments and again calculated the error

    and the recreated signal. The result is shown below. In this program we have used the

filter programs created by us. We have used a constant window size of 15 ms, which corresponds to 166 samples.

Figure 5.2: Original, Error and Recreated signal (with non-overlapping segmentation)

The figure shows that using non-overlapping segments causes spikes due to discontinuities where the errors are large. This is because we are trying to predict the speech from 0 at the edges. To overcome this limitation, overlapping segments of the speech are taken and

    windowing is done. The most popular windows are the Hamming and Hanning windows.


Below is shown the output when we take segments overlapping by N/2, where N is the window size. A Hanning window is used in this example.

Figure 5.3: Original, Error and Recreated signal (with overlapping segmentation and windowing)

It can be clearly seen from the above figure that the error signal has become smooth compared to the non-overlapping segmentation case. Hence this error signal can be encoded successfully.
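The scheme just described — segments overlapping by N/2, each multiplied by a Hanning window — can be sketched as follows (illustrative Python; the 440 Hz test tone is an assumption of ours). The periodic form of the window is used because at 50% overlap adjacent windows then sum exactly to one, which is what makes overlap-add reconstruction work.

```python
import numpy as np

N = 166                                        # 15 ms at 11025 Hz (values from the text)
hop = N // 2                                   # segments overlap by N/2
n = np.arange(N)
win = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)    # Hanning window (periodic form)

x = np.sin(2 * np.pi * 440 * np.arange(4 * N) / 11025)   # toy stand-in for speech

# Split into overlapping, windowed segments
starts = range(0, len(x) - N + 1, hop)
segments = [win * x[s:s + N] for s in starts]

# Overlap-add: adjacent windows sum to one, so the interior is reconstructed
y = np.zeros(len(x))
for s, seg in zip(starts, segments):
    y[s:s + N] += seg

interior = slice(hop, starts[-1] + hop)
print(np.allclose(x[interior], y[interior]))   # True
```

Only the first and last half-windows are not fully covered; in practice those edges are handled by padding or by keeping the partial frames.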


As the final program in this preliminary case, we solve the same case using lattice filters instead of the FIR and IIR filters. Lattice filters have a number of advantages over the FIR/IIR implementation in the case of linear prediction, which have already been explained earlier.

    Figure 5.4: Original, Error and Recreated signal with lattice filters.

Hence we can see that lattice filters give the same result as the other filters.


The spectra of the input signal and of the output signal are shown below. There is no difference between the spectra.

    Figure 5.5: Frequency Spectrum of Original Signal.


    Figure 5.6: Frequency Spectrum of Recreated Signal.


    Chapter 6

Observations and Results


    6. Observations and Results

All software simulations have been performed on MATLAB 6.1, which is a registered trademark of The MathWorks, Inc.

We have shown earlier how we reached the stage where the error is minimised by using segmentation of speech and windowing. In this section we will show the effect of changing various parameters, such as the order of prediction and the size of the window, on the error signal and the recreated signal. To take into account the effects of quantization and compression, we will encode the error signal in a smaller number of bits before using it to recreate the speech signal. This will ensure that the observations are realistic.

The speech sample used throughout the observations will be the word N-S-I-T, encoded in 8 bits and sampled at a frequency of 11025 Hz, which corresponds to telephone quality. The order of prediction will be 8 and the size of the window 15 ms, corresponding to 166 samples of speech. Several encoding schemes will be used: the error signal will be encoded in 8 down to 3 bits. In each sub-section, one parameter will be varied while all the remaining parameters hold the values specified above.

    6.1 Effect of order of prediction

The order of prediction governs how many previous samples are used to predict the next sample. As the order of the predictor is increased, up to the order of the process which generated the signal, the power spectrum of the error signal becomes more and more flat. But it is not possible to increase the order arbitrarily, since the autocorrelation matrix is a p x p matrix, where p is the order, and solving this system requires a lot of computation time. Through a large number of experiments and observations, it has been seen that an order of 8-12 is suitable for most speech samples.


In this section we will vary the order and see the effect on the error signal. We will measure the predictive gain, i.e. 10 log10(variance of the original signal / variance of the error signal), in dB. A table of the observations is shown below.

Order \ Number of bits    8        7        6        5        4        3
 2                        3.8176   3.7277   3.4369   3.0975   2.0599   0.29527
 4                        6.7546   6.6762   6.3967   5.9849   4.8896   3.8079
 6                        9.7913   9.6751   9.3888   5.9849   7.9312   7.0689
 8                       11.113   11.037   10.683   10.246    9.1502   8.0151
10                       13.269   13.157   12.78    12.402   11.391   12.291
12                       15.264   15.145   14.808   14.236   13.505   14.385
20                       15.493   15.378   15.019   14.556   13.964   14.67
40                       15.225   15.099   14.768   14.368   13.502   14.271
60                       15.539   15.421   15.099   14.607   13.835   14.6

Table 6.1: Predictive Gain Vs Order of Prediction.


Plotting a graph for the above table:

Figure 6.1: Graph of Prediction Gain Vs Order of Prediction.

Index

8 Bits: Purple
7 Bits: Green
6 Bits: Magenta
5 Bits: Black
4 Bits: Blue
3 Bits: Red
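The predictive gain used in the table and graph above reduces to a one-line helper (illustrative Python; the function name is ours):

```python
import numpy as np

def prediction_gain_db(x, e):
    """Predictive gain: 10 log10( var(original) / var(error) ), in dB."""
    return 10.0 * np.log10(np.var(x) / np.var(e))

x = np.array([1.0, -2.0, 3.0, -1.0, 0.5])
print(prediction_gain_db(x, 0.1 * x))   # an error 10x smaller gives about 20 dB
```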


    6.1.1 Spectrum of the Error Signal

As the order of prediction increases, the spectrum of the error signal is flattened, since the prediction filter removes all the correlation from the input signal and gives the output as a nearly white noise sequence.

Below are shown a few plots of the frequency-domain representation of the error signal as the order of the predictor is increased. Clearly the spectrum is gradually flattened as the order is increased. Once the order becomes greater than the order of the system that generated the original signal, there is no further flattening of the error spectrum. This is because we cannot predict with more accuracy than the actual system that generated the signal.

Figure 6.2: Frequency representation of error when order p=2.


    Figure 6.3: Frequency representation of error when order p=6.

    Figure 6.4: Frequency representation of error when order p=8.


    Figure 6.5: Frequency representation of error when order p=12.

    Figure 6.6: Frequency representation of error when order p=20.


    Figure 6.7: Frequency representation of error when order p=40.

From the above figures we can see there is not much flattening of the error after order 20. Generally, for speech signals an order of 12 is sufficient and gives good results.


Shown below are the frequency spectra of the original signal and the recreated signal.

    Figure 6.8: Frequency representation of original signal.

    This figure shows the frequency content of the original signal. Let us represent the

    frequency spectrum in a more convenient form by using the function fftshift().


    Figure 6.9: Frequency representation of original signal (Shifted).

Below we show the frequency spectrum of the recreated signal.


Figure 6.10: Frequency representation of recreated signal (Shifted) p=8.

    Figure 6.11: Frequency representation of recreated signal (Shifted) p=12.


    6.2 Effect of segment size

The window size determines the number of segments that the speech is divided into. Since speech is quasi-stationary, it can be assumed stationary for only a small duration. The smaller the segment size, the more segments will be created, and the program will require more computation time. But as the segment size is increased, the predictive gain is reduced and hence compression is not very efficient. Hence we have to find a balance between the size of the segment and the computation time required.

Window size (ms) \ Number of bits    8        7        6        5        4        3
 5                                  11.956   11.832   11.499   11.024   9.9649   9.7677
 7                                  11.164   11.076   10.705   10.294   9.4267   8.2087
10                                  11.471   11.4     10.996   10.569   9.4888   8.6293
13                                  11.032   10.935   10.618   10.07    9.2504   8.0782
15                                  11.113   11.037   10.683   10.246   9.1502   8.0151
17                                  11.035   10.944   10.576   10.153   9.1392   8.0709
20                                  10.945   10.829   10.492   9.9965   9.0453   7.9843
23                                  10.971   10.878   10.51    10.048   9.0963   8.0501
25                                  10.984   10.891   10.514   10.085   9.1083   7.9189
30                                  11.03    10.943   10.574   10.097   9.0816   8.1633
50                                  11.08    10.985   10.636   10.136   9.2119   8.3669

Table 6.2: Predictive Gain Vs Window Segment Size


Plotting the above table in graph form:

Figure 6.12: Window size Vs Predictive Gain.

Index

8 bits: Red circles
7 bits: Blue circles
6 bits: Magenta triangles
5 bits: Green stars
4 bits: Blue squares
3 bits: Black V


    6.3 Effect of quantization and compression of error signal.

Once the error signal has been quantized into a smaller number of bits, some information has been lost and cannot be recovered. We have to concentrate on how best we can reproduce the original signal from this quantized forward prediction error. Below are shown the recreated signals for the same word N-S-I-T we have used earlier. The original signal is encoded in 8 bits. We will encode the forward prediction error in 7, 6, 5, 4 and 3 bits and see the output waveform.
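A simple way to model the b-bit encoding of the error signal is midrise uniform quantization over the error's dynamic range. The sketch below is an illustration of the idea, not the exact coder used in the project:

```python
import numpy as np

def quantize(e, bits):
    """Uniform quantization of the error signal to 2**bits levels:
    map each sample to an integer code, then reconstruct the bin centre."""
    levels = 2 ** bits
    lo, hi = e.min(), e.max()
    step = (hi - lo) / levels
    idx = np.clip(np.floor((e - lo) / step), 0, levels - 1)   # integer codes
    return lo + (idx + 0.5) * step                            # reconstruction

e = np.linspace(-1.0, 1.0, 101)   # stand-in for a prediction-error sequence
for b in (8, 4, 3):
    err = np.abs(quantize(e, b) - e).max()
    print(b, round(err, 4))       # maximum error shrinks as the bit depth grows
```

The maximum reconstruction error is bounded by half a quantization step, so each extra bit halves the worst-case distortion; this is the quality loss visible in the figures that follow.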

    Figure 6.13: Original Signal encoded in 8 bits.


    Figure 6.14: Recreated Signal when error is encoded in 8 bits.

    Figure 6.15: Recreated Signal when error is encoded in 7 bits.


    Figure 6.16: Recreated Signal when error is encoded in 6 bits.

    Figure 6.17: Recreated Signal when error is encoded in 5 bits.


    Figure 6.18: Recreated Signal when error is encoded in 4 bits.

    Figure 6.19: Recreated Signal when error is encoded in 3 bits.

The loss of speech quality as the error signal is encoded in fewer and fewer bits is obvious.


    Figure 6.20: Recreated Signal Spectrum when error is encoded in 8 bits.

    Figure 6.21: Recreated Signal Spectrum when error is encoded in 7 bits.


    Figure 6.22: Recreated Signal Spectrum when error is encoded in 6 bits.

    Figure 6.23: Recreated Signal Spectrum when error is encoded in 5 bits.


    Figure 6.24: Recreated Signal Spectrum when error is encoded in 4 bits.

    Figure 6.25: Recreated Signal Spectrum when error is encoded in 3 bits.


    Chapter 7

    Conclusions and Future Scope of the Project


    7.1 Conclusions

From our experiments we have verified the accuracy and the efficiency of the Levinson-Durbin algorithm and its application to linear prediction. We have also clearly shown that it is possible to compress speech signals in the form of the forward prediction error together with the filter parameters. If the error samples are stored in a sufficient number of bits, then the recreated signal is of good quality.

As shown above, we have managed to encode an 8-bit speech sample into a 4-bit error value while maintaining the intelligibility of the speech. Depending on the application and the quality of signal required, we can choose the number of bits of precision used to store the error signal.

Our results show that the use of windowing and segmentation is very useful for efficient compression of the error signal.

The simulation results show that a prediction order in the range of 8-12 is sufficient for linear prediction of speech, and using orders above this does not give much improvement.

Figures obtained from our simulations on MATLAB 6.1 show that a window size of 5-10 ms gives very good results for speech sampled at 11025 Hz. For good results, the window should ideally contain 60 to 100 speech samples.


    7.2 Future Scope of the Project

In our project we successfully simulated compression of speech in MATLAB. We also wrote all the relevant functions ourselves, so that all the work can be easily implemented in hardware.

Future students working on this project will find the hardware implementation of our project very simple, as we have provided a ready-made set of important functions along with the source code. These functions can be easily coded in the C language, from which they can be converted into the assembly language of any hardware.

Future work on the project can also be done on improved DSP techniques to further reduce the error and make the compression more efficient.

The linear predictor coefficients can be calculated faster using the Schur algorithm on a parallel architecture, which is much more computationally efficient. Hence real-time speech processing can be performed.


8. References

[1] J. Makhoul, "Linear Prediction: A tutorial review", Proc. of the IEEE, vol. 63, pp. 561-589, April 1975.

[2] John D. Markel and A. H. Gray, "A linear prediction vocoder simulation based upon the autocorrelation method", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. ASSP-22, April 1974.

[3] L. R. Rabiner, B. S. Atal and M. R. Sambur, "LPC prediction error: analysis of its variation with position of analysis frame", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. ASSP-25, Oct. 1977.

[4] J. G. Proakis, C. M. Rader, F. Ling and C. L. Nikias, Advanced Digital Signal Processing, Macmillan Publishing Company, New York, 1992. ISBN 0-02-396841-9.

[5] J. Makhoul, "Stable and efficient lattice methods for linear prediction", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. ASSP-25, Oct. 1977.

[6] Douglas O'Shaughnessy, "Linear Predictive Coding", IEEE Potentials, Feb. 1988.

[7] John E. Roberts and R. H. Wiggins, "Piecewise Linear Predictive Coding (PLPC)", The MITRE Corporation, Bedford, Massachusetts 01730, May 1980.

[8] M. A. Atashroo, "Autocorrelation Prediction", Advanced Research Projects Agency.

[9] Manfred R. Schroeder, "Linear Predictive Coding of Speech: Review and Current Directions", IEEE Communications Magazine, vol. 23, no. 8, pp. 54-61, Aug. 1985.

[10] Saeed V. Vaseghi, Advanced Digital Signal Processing and Noise Reduction, John Wiley and Sons, 1996. ISBN 0-471-62692-9.

[11] L. R. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, 1993.

[12] Simon Haykin, Adaptive Filter Theory, 3rd edition, Prentice Hall International, New Jersey. ISBN 0-13-397985-7.