Audio Processing


Speech processing, quantisation, compression, etc.


AUDIO PROCESSING

6.3 Quantisation and Transmission of Sound


INTRODUCTION

Quantisation follows the process of sampling at the Nyquist rate, and the sampling is explicitly uniform (equally spaced in time). Once quantised, the signal can be transmitted or stored. There are basically two kinds of quantisation:

1. Linear (uniform)
2. Non-linear (non-uniform)

• Non-uniform quantisation gives protection of weak passages over loud ones and needs fewer steps; uniform quantisation gives uniform precision over the entire range.
• Non-uniform quantisation is obtained by passing the baseband signal through a compressor and then a uniform quantiser (companding); a sketch follows below.
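A minimal sketch of companding, assuming the standard μ-law compressor curve with μ = 255 (the value used in G.711 telephony); the function names and the 8-bit uniform quantiser are illustrative choices, not taken from the slides. Running the signal through the compressor before a uniform quantiser gives weak (small-amplitude) passages relatively finer steps than loud ones.

```python
import numpy as np

MU = 255.0  # mu-law parameter (the value used in G.711 telephony)

def compress(x):
    """Compressor: maps x in [-1, 1] through the mu-law curve."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """Expander (inverse of the compressor), applied on reconstruction."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

def uniform_quantise(y, bits=8):
    """Uniform quantiser over [-1, 1] with 2**bits steps."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(y / step) * step, -1.0, 1.0)

# Non-uniform quantisation of the baseband signal:
# compressor -> uniform quantiser -> expander.
x = np.linspace(-1.0, 1.0, 9)
x_reconstructed = expand(uniform_quantise(compress(x)))
```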


Why compression at all?

• Reduction in the amount of memory required: the standard answer!
• What else? Compression pays off whenever there is non-linearity in the signal to exploit.
• A compression scheme has 3 stages: 1. Transformation 2. Loss 3. Coding

6.3.1 CODING OF AUDIO

• Quantisation and transformation of data are collectively called coding of the data. The μ-law technique, plus a simple algorithm, gives further compression.
• Coding differences in the signal: 1. reduces the size of the signal values, and 2. concentrates the histogram of sample values (variance reduction), therefore allowing lossless compression into a shorter bit stream.
• Pulse Code Modulation (PCM) is the formal term for sampling and quantisation.


Bit rate = bits per sample × the sampling rate.

Typical compressed audio rates: 128 kbps, 192 kbps, 256 kbps.

Goal: produce quantised, sampled output for audio.
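For illustration (the CD-quality figures are my example, not from the slides), the formula gives the uncompressed rate that those compressed rates are measured against:

```python
def bit_rate(bits_per_sample, sampling_rate_hz, channels=1):
    """Bit rate in bits per second = bits per sample x sampling rate x channels."""
    return bits_per_sample * sampling_rate_hz * channels

# Uncompressed CD-quality stereo: 16 bits per sample, 44.1 kHz, 2 channels.
print(bit_rate(16, 44_100, channels=2))   # 1411200 bps, about 1411 kbps,
                                          # versus 128/192/256 kbps compressed
```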


Producing quantised, sampled output for audio:

• Decision boundaries for the quantiser input intervals: the coder mapping.
• Representative values (reconstruction levels) output from the quantiser: the decoder mapping.

PCM in speech compression, assuming a bandwidth from 50 Hz to about 10 kHz:

1. Uniform quantisation without companding: 12 bits per sample, bit rate 240 kbps.
2. With companding: 8 bits per sample, bit rate 160 kbps.
3. The standard approach to telephony assumes the highest frequency of interest is only 4 kHz.
4. The sampling rate is then 8 kHz, and the companded bit rate is reduced to 64 kbps.
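As a quick check, these figures follow from the bit-rate formula above, assuming sampling at the Nyquist rate (twice the bandwidth):

\begin{aligned}
2 \times 10\,\mathrm{kHz} \times 12\ \mathrm{bits} &= 240\ \mathrm{kbps} && \text{(uniform, no companding)}\\
2 \times 10\,\mathrm{kHz} \times 8\ \mathrm{bits} &= 160\ \mathrm{kbps} && \text{(with companding)}\\
2 \times 4\,\mathrm{kHz} \times 8\ \mathrm{bits} &= 64\ \mathrm{kbps} && \text{(telephony, 8 kHz sampling)}
\end{aligned}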


6.3.2 PULSE CODE MODULATION


Two practical wrinkles:

1. Sounds are band-limited before sampling, e.g. to 4 kHz, by a band-limiting filter.
2. The decoded staircase signal is smoothed by low-pass filtering on reconstruction.

(Figure: original analog signal; decoded staircase signal; reconstructed signal after low-pass filtering.)

Audio is often stored not in PCM form but as differences, which need fewer bits: the histogram of differences is peaked, with its maximum at zero. For example, the histogram of a linear ramp signal is flat, while the histogram of the derivative of the signal (i.e., the differences from sampling point to sampling point) consists of a spike at the slope value. We can then assign short codes to prevalent values and long code words to rarely occurring ones.
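A tiny illustration of the ramp example above (the 8-bit ramp is an arbitrary choice):

```python
import numpy as np

# A linear ramp has a flat histogram of sample values, but its sample-to-sample
# differences all equal the slope, so the difference histogram is a single spike.
ramp = np.arange(256)          # values 0, 1, ..., 255 (slope = 1)
diffs = np.diff(ramp)          # every difference equals 1

print(np.unique(ramp).size)    # 256 distinct values to code
print(np.unique(diffs).size)   # 1 distinct value: ideal for one short code word
```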


DIFFERENTIAL CODING OF AUDIO

Predict the next sample as being equal to the current sample, and transmit the differences using a PCM system. This is the simplest case of linear prediction.


6.3.4 LOSSLESS PREDICTIVE CODING

The simplest predictor and its error signal:

    \hat{f}_n = f_{n-1}, \qquad e_n = f_n - \hat{f}_n

Linear predictor function (typically based on the 2 to 4 previous values):

    \hat{f}_n = \sum_{k=1}^{2\ \mathrm{to}\ 4} a_{n-k}\, f_{n-k}



(Figure: a digital speech signal, the histogram of its values, and the histogram of its differences.)

Problem: what if there is an exceptionally large difference?
Solution: reserve shift-up (SU) and shift-down (SD) codes worth +32 and -32; e.g., a difference of 100 is sent as SU, SU, SU, 4.

PREDICTOR EXAMPLE


Predictor and error:

    \hat{f}_n = \mathrm{trunc}\left[(f_{n-1} + f_{n-2})/2\right], \qquad e_n = f_n - \hat{f}_n

(Figure: encoder and decoder block diagrams.)

Example: calculate the transmitted errors for f_1, f_2, f_3, f_4, f_5 = 21, 22, 27, 25, 22 (see the sketch below).
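A minimal sketch of this example; the boundary handling (inventing an extra leading value f_0 = f_1 and sending f_1 uncoded) is an assumption made here so the two-term predictor has something to work with.

```python
def encode(f):
    """Lossless predictive encoder: send f[0], then the prediction errors."""
    f = [f[0]] + list(f)                    # invented leading value f0 = f1
    errors = []
    for n in range(2, len(f)):
        # // matches trunc here because the sums are non-negative
        pred = (f[n - 1] + f[n - 2]) // 2   # trunc[(f_{n-1} + f_{n-2}) / 2]
        errors.append(f[n] - pred)          # e_n = f_n - prediction
    return f[1], errors

def decode(first, errors):
    """The decoder mirrors the predictor, so reconstruction is exact."""
    f = [first, first]                      # same invented leading value
    for e in errors:
        f.append((f[-1] + f[-2]) // 2 + e)
    return f[1:]

first, errs = encode([21, 22, 27, 25, 22])
print(errs)                   # [1, 6, 1, -4]
print(decode(first, errs))    # [21, 22, 27, 25, 22]
```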


6.3.5 DPCM

DPCM is exactly the same as predictive coding, except that it incorporates a quantizer step.

    \hat{f}_n = \mathrm{function\_of}(\tilde{f}_{n-1}, \tilde{f}_{n-2}, \tilde{f}_{n-3}, \ldots),
    e_n = f_n - \hat{f}_n,
    \tilde{e}_n = Q[e_n],
    \text{transmit codeword}(\tilde{e}_n),
    \text{reconstruct: } \tilde{f}_n = \hat{f}_n + \tilde{e}_n.


Distortion is measured as the average squared error; minimising it leads to the Lloyd-Max quantizer, which is based on a least-squares minimization of the error term.

For speech, we could modify quantization steps adaptively by estimating the mean and variance of a patch of signal values, and shifting quantization steps accordingly, for every block of signal values. That is, starting at time i we could take a block of N values fn and try to minimize the quantization error:

Average squared error:

    D = \frac{1}{N} \sum_{n=1}^{N} (f_n - \tilde{f}_n)^2

Adapting the quantiser to a block of N values starting at time i amounts to choosing Q so as to achieve

    \min \sum_{n=i}^{i+N-1} \left[f_n - Q(f_n)\right]^2

LLOYD MAX QUANTISER

1. Get the pdf of the input.
2. Guess M representation levels.
3. Apply the threshold condition (decision boundaries midway between levels).
4. Apply the mean-square-error condition (each level is the mean of its interval).
5. Iterate steps 3 and 4 until convergence (a sketch follows below).
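A rough sketch of this iteration, assuming the pdf is represented empirically by a set of training samples (the Laplacian test data and M = 8 are arbitrary choices for illustration):

```python
import numpy as np

def lloyd_max(samples, m, iters=50):
    """Iteratively refine M representation levels and their decision boundaries."""
    samples = np.sort(np.asarray(samples, dtype=float))
    levels = np.linspace(samples[0], samples[-1], m)   # step 2: initial guess
    for _ in range(iters):
        # step 3: threshold condition - boundaries are midpoints between levels
        bounds = (levels[:-1] + levels[1:]) / 2.0
        # step 4: MSE condition - each level moves to the mean of its interval
        cells = np.digitize(samples, bounds)
        for j in range(m):
            in_cell = samples[cells == j]
            if in_cell.size:
                levels[j] = in_cell.mean()
        # step 5: repeat steps 3 and 4
    return levels, bounds

rng = np.random.default_rng(0)
levels, bounds = lloyd_max(rng.laplace(scale=1.0, size=10_000), m=8)
```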


Since the signal differences are so peaked, we could model them using a Laplacian probability distribution function, which is strongly peaked at zero: for variance \sigma^2,

    l(x) = \frac{1}{\sqrt{2\sigma^2}} \exp\left(-\sqrt{2}\,|x|/\sigma\right)

One then assigns quantization steps for a quantizer with nonuniform steps by assuming the signal differences d_n are drawn from such a distribution, and choosing the steps to minimize the weighted quantization error

    \min \sum_{n=i}^{i+N-1} \left[d_n - Q(d_n)\right]^2 l(d_n)

QUANTISATION ERROR


Recall the DPCM relations:

    \hat{f}_n = \mathrm{function\_of}(\tilde{f}_{n-1}, \tilde{f}_{n-2}, \ldots), \quad e_n = f_n - \hat{f}_n, \quad \tilde{e}_n = Q[e_n], \quad \tilde{f}_n = \hat{f}_n + \tilde{e}_n


• Notice that the quantization noise, f_n - \tilde{f}_n, is equal to the quantization effect on the error term, e_n - \tilde{e}_n.

• Suppose we adopt the particular predictor below:

    \hat{f}_n = \mathrm{trunc}\left[(\tilde{f}_{n-1} + \tilde{f}_{n-2})/2\right]    (1)

so that e_n = f_n - \hat{f}_n is an integer.

• As well, use the quantization scheme:

    \tilde{e}_n = Q[e_n] = 16 \cdot \mathrm{trunc}\left[(255 + e_n)/16\right] - 256 + 8    (2)

    \tilde{f}_n = \hat{f}_n + \tilde{e}_n


e_n in range        Quantized to value
-255 .. -240              -248
-239 .. -224              -232
   .                         .
   .                         .
   .                         .
 -31 ..  -16               -24
 -15 ..    0                -8
   1 ..   16                 8
  17 ..   32                24
   .                         .
   .                         .
   .                         .
 225 ..  240               232
 241 ..  255               248
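A minimal sketch putting predictor (1) and quantizer (2) together; the input sequence and the seeding of the first two reconstructed values with the first sample are assumptions for illustration (the slides do not specify start-up handling).

```python
def quantise(e):
    """Quantizer (2): 16 * trunc[(255 + e) / 16] - 256 + 8, for e in -255..255."""
    return 16 * ((255 + e) // 16) - 256 + 8

def dpcm(f):
    """Predict from reconstructed values, quantise the error, then rebuild
    exactly as the decoder would (so encoder and decoder stay in step)."""
    recon = [f[0], f[0]]                          # seed the reconstructed signal
    codes = []
    for sample in f:
        pred = (recon[-1] + recon[-2]) // 2       # predictor (1)
        e_q = quantise(sample - pred)             # quantised prediction error
        codes.append(e_q)
        recon.append(pred + e_q)                  # reconstructed value
    return codes, recon[2:]

codes, recon = dpcm([60, 100, 120, 110, 90])
print(codes)   # transmitted (quantised) errors
print(recon)   # reconstructed signal: close to, but not equal to, the input
```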


DELTA MODULATION

This scheme sends only the difference between successive pulses: if the pulse at time t_{n+1} is higher in amplitude than the pulse at time t_n, a single bit, say a "1", is used to indicate the positive change; if it is lower, a "0" is used.

This scheme works well for small changes in signal values between samples.

If changes in amplitude are large, this will result in large errors.


    \hat{f}_n = \tilde{f}_{n-1},
    e_n = f_n - \hat{f}_n = f_n - \tilde{f}_{n-1},
    \tilde{e}_n = \begin{cases} +k & \text{if } e_n > 0, \\ -k & \text{otherwise,} \end{cases} \quad \text{where } k \text{ is a constant},
    \tilde{f}_n = \hat{f}_n + \tilde{e}_n.

Solution: sample at many times the Nyquist rate. If the slope of the actual signal curve is high, the staircase approximation cannot keep up; for a steep curve, the step size k should be changed adaptively (Adaptive DM).


Example: consider the signal values f_1, f_2, f_3, f_4 = 10, 11, 13, 15, with \tilde{f}_1 = f_1 = 10 and, as the differences below imply, step size k = 4. Applying the delta-modulation equations above:

    e_2 = 11 - 10 = 1,  so \tilde{e}_2 = +4 and \tilde{f}_2 = 14
    e_3 = 13 - 14 = -1, so \tilde{e}_3 = -4 and \tilde{f}_3 = 10
    e_4 = 15 - 10 = 5,  so \tilde{e}_4 = +4 and \tilde{f}_4 = 14

The reconstructed values 10, 14, 10, 14 are a poor match to the original 10, 11, 13, 15 (see the sketch below).
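A small sketch reproducing this example; the step size k = 4 is the value implied by the differences quoted above.

```python
def delta_modulate(f, k=4):
    """Delta modulation with constant step k: one bit (+k or -k) per sample."""
    recon = [f[0]]                  # the first sample is sent as-is
    bits = []
    for sample in f[1:]:
        e = sample - recon[-1]
        bits.append(1 if e > 0 else 0)          # the single transmitted bit
        recon.append(recon[-1] + (k if e > 0 else -k))
    return bits, recon

bits, recon = delta_modulate([10, 11, 13, 15], k=4)
print(bits)    # [1, 0, 1]
print(recon)   # [10, 14, 10, 14] - tracks 10, 11, 13, 15 poorly
```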


Adapt both the step size and the decision boundaries!

6.3.7 ADPCM

• ADPCM (Adaptive DPCM) takes the idea of adapting the coder to suit the input much further. It adapts two pieces: the quantizer and the predictor.

1. In Adaptive DM, we adapt the quantizer step size to suit the input. In DPCM, we can change the step size as well as the decision boundaries, using a non-uniform quantizer. We can carry this out in two ways:

(a) Forward adaptive quantization: use the properties of the input signal.

(b) Backward adaptive quantization: use the properties of the quantized output. If quantized errors become too large, we should change the non-uniform quantizer.


2. We can also adapt the predictor, again using forward or backward adaptation. Making the predictor coefficients adaptive is called Adaptive Predictive Coding (APC):

(a) Recall that the predictor is usually taken to be a linear function of previous reconstructed quantized values, \tilde{f}_n.

(b) The number of previous values used is called the "order" of the predictor. For example, if we use M previous values, we need M coefficients a_i, i = 1..M, in a predictor

    \hat{f}_n = \sum_{i=1}^{M} a_i \tilde{f}_{n-i}    (6.22)

• However, we can get into a difficult situation if we try to change the prediction coefficients that multiply previous quantized values, because that makes for a complicated set of equations to solve for these coefficients:

(a) Suppose we decide to use a least-squares approach to solving a minimization, trying to find the best values of the a_i:

    \min \sum_{n=1}^{N} (f_n - \hat{f}_n)^2    (6.23)

(b) Here we would sum over a large number of samples f_n, for the current patch of speech, say. But because \hat{f}_n depends on the quantization, we have a difficult problem to solve. As well, we should really be changing the fineness of the quantization at the same time, to suit the signal's changing nature; this makes things problematical.

(c) Instead, one usually resorts to solving the simpler problem that results from using not \tilde{f}_n in the prediction but simply the signal f_n itself. Explicitly writing \hat{f}_n in terms of the coefficients a_i, we wish to solve:

    \min \sum_{n=1}^{N} \left(f_n - \sum_{i=1}^{M} a_i f_{n-i}\right)^2    (6.24)

Differentiation with respect to each of the a_i, and setting to zero, produces a linear system of M equations that is easy to solve. (The set of equations is called the Wiener-Hopf equations.)
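A small sketch of solving (6.24) numerically; rather than writing out the Wiener-Hopf equations, it hands the same least-squares problem to numpy's solver. The order M = 3 and the toy signal are arbitrary choices for illustration.

```python
import numpy as np

def fit_predictor(f, m):
    """Least-squares fit of M coefficients a_i minimising
    sum_n (f_n - sum_i a_i * f_{n-i})^2, as in Eq. (6.24)."""
    f = np.asarray(f, dtype=float)
    # each row holds the M previous samples [f_{n-1}, ..., f_{n-M}]
    rows = np.array([f[n - m:n][::-1] for n in range(m, len(f))])
    targets = f[m:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return a

rng = np.random.default_rng(1)
signal = np.sin(0.2 * np.arange(200)) + 0.05 * rng.standard_normal(200)
print(fit_predictor(signal, m=3))   # the three predictor coefficients a_1, a_2, a_3
```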

(Figure: schematic diagram for the ADPCM encoder and decoder.)


QUERIES??