
COMPARISON OF LMS, NLMS AND RLS ADAPTIVE NOISE ALGORITHMS

ECE 539 FINAL PROJECT

PRESENTED BY: VENKATA KISHORE KAJULURI

[email protected] 101689819

VERSION 1.0

MAY 5, 2015


ABSTRACT:

The purpose of this project is to study adaptive filter theory as applied to the noise cancellation problem. First, the theory behind adaptive filters is presented. Second, the three most commonly used adaptive algorithms, LMS, NLMS and RLS, are described. The work focuses on one class of adaptive filter application, the active noise cancellation problem, presenting the general problem, the three algorithmic solutions, and a comparison between them.

1.1 Adaptive Filters:

Adaptive filters are filters with the ability to adapt to an unknown environment. This family of filters has been widely applied because of its versatility (it can operate in an unknown system) and its low cost (the hardware cost of implementation, compared with non-adaptive filters acting on the same system).

The ability to operate in an unknown environment, combined with the capability of tracking time variations in the input statistics, makes the adaptive filter a powerful device for signal-processing and control applications. Indeed, adaptive filters have been used successfully in numerous applications over the years.

As mentioned above, the applications of adaptive filters are numerous. For that reason, they are separated into four basic classes: identification, inverse modelling, prediction and interference cancelling.

All of the applications mentioned above share a common characteristic: an input signal is received by the adaptive filter and compared with a desired response, generating an error. That error is then used to modify the adjustable coefficients of the filter, generally called weights, in order to minimize the error in some optimal sense, in some cases driving it towards zero and in others towards a desired signal.

1.2 Active Noise Cancelling

Active noise cancelling (ANC), also called adaptive noise cancelling or the active noise canceller, belongs to the interference cancelling class. The aim of this algorithm, as with any adaptive filter, is to minimise the noise interference or, in the optimum situation, cancel that perturbation entirely. The approach adopted in the ANC algorithm is to try to recover the original signal s(n).

In this study, the final objective is to use an ANC algorithm to cancel noise interference, but the algorithm can be employed to deal with any other type of corrupted signal. A scheme of the ANC is shown in figure 1.1 below.
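The structure of figure 1.1 can be sketched in plain text as follows (all signals are defined in the next paragraph):

d(n) = s(n) + n2(n) ----------------------------+
                                                |
                                                v
n1(n) --> [ adaptive filter w(n) ] --> y(n) --> (-) --> e(n) = d(n) - y(n) ~ s(n)
                    ^                                        |
                    +-------------- weight update -----------+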


In the ANC the aim is to minimize the noise interference that corrupts the original input signal. In figure 1.1, the desired signal d(n) is composed of an unknown signal, which we call s(n), corrupted by an additive noise n2(n) generated by the interference. The adaptive filter is installed in a location where its only input is the interference signal n1(n). The signals n1(n) and n2(n) are correlated. The output of the filter y(n) is compared with the desired signal d(n), generating an error e(n). That error, which is also the system output, is used to adjust the variable weights of the adaptive filter so as to minimize the noise interference. In the optimal situation, the system output e(n) consists of the signal s(n), free of the noise interference n2(n).

2. The Active Noise Cancellation Problem:

Noise cancellation methods developed prior to ANC had access to the desired signal. In the ANC case, however, the desired response arrives already corrupted by noise, which makes the problem very difficult to solve using traditional methods.

Analyzing figure 1.1, it can be seen that the noise n2(n) which corrupts the desired signal s(n) is represented with the same initial character as the input signal n1(n). The notation is chosen to emphasise that n1(n) and n2(n) are similar: n1(n) is the noise source and n2(n) is a secondary noise which is correlated with n1(n). Both noises are also uncorrelated with the signal s(n), which implies the following conditions:

d(n) = s(n) + n2(n)
E[s(n) n1(n)] = 0
E[s(n) n2(n)] = 0
E[n1(n) n2(n)] = p(n)


where d(n) is the desired signal, E[·] is the expectation operator and p(n) is an unknown correlation between n1(n) and n2(n).

Those necessary conditions can be obtained, for example, by installing a sensor (sensor 1) in a place where only the noise source n1(n) is detected. A second sensor (sensor 2) detects the desired signal d(n); the noise n2(n) is correlated with n1(n), for example through the delay between them or through a filtering operation.

The error signal e(n), which is also the system output, should in the optimum sense contain the original signal s(n).

The aim of the algorithm is to make the output y(n), which equals the transposed tap-weight vector wT(n) times x'(n) (the vector formed by the reference noise signal n1(n), having the same length as w(n)), i.e. y(n) = wT(n) x'(n), equal to the noise n2(n) (y(n) = n2(n)). Given this equivalence, it is easy to deduce that the error equals the desired signal s(n): e(n) = d(n) − y(n) = s(n) + n2(n) − n2(n) = s(n).

2.1 LMS Solution

Least mean squares (LMS) algorithms are a class of adaptive filter that mimics a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method, in that the filter is adapted based only on the error at the current time.
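The update rule used below follows from the standard derivation: take the instantaneous squared error as the cost function and descend its gradient with respect to the weights,

J(n) = e²(n), with e(n) = d(n) − wT(n) x'(n)
∇w J(n) = −2 e(n) x'(n)
w(n+1) = w(n) − (μ/2) ∇w J(n) = w(n) + μ e(n) x'(n)

(for complex-valued signals the conjugate e*(n) appears, as in the listing below).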

Algorithm:

p = filter order
μ = step size
N = length of the desired signal
d(n) = s(n) + n2(n)

Initialisation: w(0) = [0, 0, ..., 0]T (p taps)

For each n = 1, ..., N:
x'(n) = [n1(n), n1(n−1), ..., n1(n−p+1)]T
y(n) = wT(n) x'(n)
e(n) = d(n) − y(n)
w(n+1) = w(n) + μ e*(n) x'(n)

where d(n) = s(n) + n2(n) is the desired system output (the original signal corrupted by noise); y(n) is the ANC's output; w(n) is the tap-weight vector; x'(n) is the vector formed by the reference noise signal n1(n), having the same length as w(n); e(n) is the error signal; and μ is the step-size parameter.
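As a quick numerical illustration (with made-up values), one iteration of the update for p = 2 and μ = 0.1:

w(n) = [0.5, −0.2]T, x'(n) = [1, 2]T, d(n) = 0.3
y(n) = 0.5·1 + (−0.2)·2 = 0.1
e(n) = 0.3 − 0.1 = 0.2
w(n+1) = [0.5, −0.2]T + 0.1·0.2·[1, 2]T = [0.52, −0.16]T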

Simulation Results:

Noise Cancellation


MSE:


2.2 NLMS Solution

In the LMS algorithm studied in the last section, the tap-weight vector receives a correction μ e*(n) x(n) that is directly proportional to the tap-input vector x(n).

When x(n) is large, the LMS algorithm experiences a gradient noise amplification problem. To solve this problem, the normalized least-mean-square (NLMS) algorithm was developed.

The growth of the input x(n) makes it very difficult (if not impossible) to choose a μ that guarantees the algorithm's stability. Therefore, the NLMS uses a variable step-size parameter given by:

μ(n) = μ' / (δ + ||x(n)||²)

where δ is a small positive constant, μ' is the NLMS step-size parameter with 0 < μ' < 2, and ||·|| is the Euclidean norm.
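For instance (illustrative values only), with μ' = 0.5, δ = 0.001 and x(n) = [1, 2]T, the squared norm is ||x(n)||² = 1 + 4 = 5, so μ(n) = 0.5/(0.001 + 5) ≈ 0.1: a larger input automatically produces a smaller effective step size.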

Algorithm:

p = filter order
μ' = step size (0 < μ' < 2)
δ = small positive constant

Initialisation: w(0) = [0, 0, ..., 0]T (p taps)

For each n = 1, ..., N:
x'(n) = [n1(n), n1(n−1), ..., n1(n−p+1)]T
y(n) = wT(n) x'(n)
e(n) = d(n) − y(n)
w(n+1) = w(n) + [μ' / (δ + x'T(n) x'(n))] e*(n) x'(n)


Simulation Results:


MSE:


2.3 RLS Solution

Contrary to the LMS algorithm, whose aim is to reduce the mean square error, the recursive least-squares (RLS) algorithm's objective is to find, recursively, the filter coefficients that minimize a least-squares cost function. The RLS algorithm has the advantage of fast convergence but, on the other hand, it suffers from high computational complexity.

The cost function of this algorithm is the weighted least-squares (WLS) criterion, given by:

C(n) = Σ_{i=1}^{n} λ^(n−i) e²(i)

where 0 < λ ≤ 1 is called the "forgetting factor".

Algorithm:

p = filter order
λ = forgetting factor
δ = value used to initialise P(0)
N = length of the desired signal

Initialisation: w(0) = [0, 0, ..., 0]T (p taps), P(0) = δ^(−1) I

For each n = 1, ..., N:
x'(n) = [n1(n), n1(n−1), ..., n1(n−p+1)]T
y(n) = wT(n−1) x'(n)
e(n) = d(n) − y(n)
g(n) = P(n−1) x'(n) {λ + x'T(n) P(n−1) x'(n)}^(−1)
P(n) = λ^(−1) P(n−1) − g(n) x'T(n) λ^(−1) P(n−1)
w(n) = w(n−1) + e(n) g(n)

Choosing λ:

The smaller λ is, the smaller the contribution of previous samples, which makes the filter more sensitive to recent samples and produces more fluctuation in the filter coefficients. The case λ = 1 is referred to as the growing-window RLS algorithm. In practice, λ is usually chosen between 0.98 and 1.
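A useful rule of thumb (standard for exponentially weighted RLS) is that the effective memory of the filter is roughly 1/(1 − λ) samples. For λ = 0.98 this gives about 50 samples; indeed, a sample 50 iterations old is weighted by λ^50 = 0.98^50 ≈ 0.36. For λ = 1 the memory is infinite, which is the growing-window case mentioned above.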


Simulation:

Noise cancellation:


MSE:


Summary:

Computations (LMS): for a filter of order p, each LMS iteration costs roughly 2p + 1 multiplications and 2p additions (p multiplications for the filter output, p + 1 for the weight update).

Computations (NLMS): the NLMS additionally computes the input energy x'T(n) x'(n) and one division for the normalised step size, for roughly 3p + 1 multiplications per iteration.


Computations (RLS): each RLS iteration updates the p × p matrix P(n), so the cost grows as O(p²) operations per iteration (roughly 4p² multiplications), which is the price paid for the faster convergence.

Conclusion:

The study shows that the RLS algorithm is more robust than the LMS algorithm, exhibiting a smaller error and faster convergence for the case of white Gaussian noise interference. The RLS algorithm has greater complexity and computational cost but, depending on the quality required of the ANC device, it is the best solution to adopt.


Code:

LMS Algorithm:

Function:

function [desired,w,MSE,e] = LMSfunction(order,mu,ref,primary,N,signal)
% LMS adaptive noise canceller
R = zeros(order,1);            % tap-input vector built from the reference noise
desired = zeros(N,1);          % recovered signal e(n) = d(n) - y(n)
w = zeros(order,1);            % tap-weight vector

% LMS algorithm
for i = 1:N
    R = [ref(i); R(1:end-1)];          % shift the newest reference sample in
    y(i) = w'*R;                       % filter output (estimate of the noise)
    desired(i) = primary(i) - y(i);    % error = primary minus noise estimate
    e(i) = desired(i);
    w = w + mu*R*conj(e(i));           % LMS weight update
end

% squared deviation of the recovered signal from the clean signal
MSE = e' - signal(1:length(e));
MSE = MSE.^2;
end

Function call:

clear all; close all; clc;

order = 5;
t = 0:1/40000:5;                       % time range: 0-5 seconds
[signal,fs] = audioread('flute.wav');

x = wgn(length(signal),1,0);           % white Gaussian noise source
a = 100; b = [2 2];
noise = filter(b,a,x);                 % shape and scale the noise
noise = noise*2;
ref = noise;                           % reference input n1(n) to the canceller
N = length(signal);
primary = signal + noise;              % desired signal d(n) = s(n) + n2(n)

mu = 0.01;
[desired1,w1,MSE1,e1] = LMSfunction(order,mu,ref,primary,N,signal);
mu = 0.05;
[desired2,w2,MSE2,e2] = LMSfunction(order,mu,ref,primary,N,signal);
mu = 0.1;
[desired3,w3,MSE3,e3] = LMSfunction(order,mu,ref,primary,N,signal);
mu = 0.001;
[desired4,w4,MSE,e4] = LMSfunction(order,mu,ref,primary,N,signal);

figure(1);
subplot(4,1,1); plot(MSE);
xlabel('N'); ylabel('Error'); title('Mean squared error for mu=0.001');
subplot(4,1,2); plot(MSE1);
xlabel('N'); ylabel('Error'); title('Mean squared error for mu=0.01');
subplot(4,1,3); plot(MSE2);
xlabel('N'); ylabel('Error'); title('Mean squared error for mu=0.05');
subplot(4,1,4); plot(MSE3);
xlabel('N'); ylabel('Error'); title('Mean squared error for mu=0.1');

figure(2);
subplot(4,1,1); plot(signal);  title('Original signal');
subplot(4,1,2); plot(noise);   title('White Gaussian noise');
subplot(4,1,3); plot(primary); title('Signal + noise');
subplot(4,1,4); plot(e3);      title('e(n) resembling desired signal');

% listen to the original, corrupted, reference and recovered signals
sound(signal, fs);  pause(5);
sound(primary, fs); pause(5);
sound(ref, fs);     pause(5);
sound(desired2, fs)


NLMS Algorithm:

Function:

function [desired,w,MSE,e] = NLMSfunction(order,alfa,ref,primary,N,signal)
% NLMS adaptive noise canceller
R = zeros(order,1);            % tap-input vector built from the reference noise
desired = zeros(N,1);          % recovered signal e(n) = d(n) - y(n)
c = 0.001;                     % small constant delta guarding against division by zero
w = zeros(order,1);            % tap-weight vector

% NLMS algorithm
for i = 1:N
    R = [ref(i); R(1:end-1)];          % shift the newest reference sample in
    y(i) = w'*R;                       % filter output (estimate of the noise)
    desired(i) = primary(i) - y(i);    % error = primary minus noise estimate
    e(i) = desired(i);
    mu = alfa/(c + R'*R);              % normalised (input-dependent) step size
    w = w + mu*R*conj(e(i));           % NLMS weight update
end

% squared deviation of the recovered signal from the clean signal
MSE = e' - signal(1:length(e));
MSE = MSE.^2;
end

Function call:

clear all; close all; clc;

order = 5;
t = 0:1/40000:5;                       % time range: 0-5 seconds
[signal,fs] = audioread('flute.wav');

x = wgn(length(signal),1,0);           % white Gaussian noise source
a = 100; b = [2 2];
noise = filter(b,a,x);                 % shape and scale the noise
noise = noise*2;
ref = noise;                           % reference input n1(n) to the canceller
N = length(signal);
primary = signal + noise;              % desired signal d(n) = s(n) + n2(n)

alfa = 0.005;
[desired1,w1,MSE1,e1] = NLMSfunction(order,alfa,ref,primary,N,signal);
alfa = 0.001;
[desired2,w2,MSE2,e2] = NLMSfunction(order,alfa,ref,primary,N,signal);
alfa = 0.0001;
[desired3,w3,MSE3,e3] = NLMSfunction(order,alfa,ref,primary,N,signal);
alfa = 0.01;
[desired4,w4,MSE,e4] = NLMSfunction(order,alfa,ref,primary,N,signal);

figure(1);
subplot(4,1,1); plot(MSE);
xlabel('N'); ylabel('Error'); title('Mean squared error for alfa=0.01');
subplot(4,1,2); plot(MSE1);
xlabel('N'); ylabel('Error'); title('Mean squared error for alfa=0.005');
subplot(4,1,3); plot(MSE2);
xlabel('N'); ylabel('Error'); title('Mean squared error for alfa=0.001');
subplot(4,1,4); plot(MSE3);
xlabel('N'); ylabel('Error'); title('Mean squared error for alfa=0.0001');

figure(2);
subplot(4,1,1); plot(signal);  title('Original signal');
subplot(4,1,2); plot(noise);   title('White Gaussian noise');
subplot(4,1,3); plot(primary); title('Signal + noise');
subplot(4,1,4); plot(e3);      title('e(n) resembling desired signal');

% sound(signal, fs);  pause(5);
% sound(primary, fs); pause(5);
% sound(ref, fs);     pause(5);
% sound(desired2, fs)


RLS Algorithm:

Function:

function [e,w,MSE] = RLSFunction(order,delta,ref,primary,N,signal)
% RLS adaptive noise canceller
lambda = 1;                    % forgetting factor (growing-window RLS)
w = zeros(order,1);            % tap-weight vector
M = order;
P = eye(order)/delta;          % inverse correlation matrix, P(0) = I/delta

for n = M:N
    uvec = ref(n:-1:n-M+1);                                   % tap-input vector
    k = lambda^(-1)*P*uvec/(1 + lambda^(-1)*uvec'*P*uvec);    % gain vector g(n)
    e(n) = primary(n) - w'*uvec;                              % a priori error
    w = w + k*conj(e(n));                                     % weight update
    P = lambda^(-1)*P - lambda^(-1)*k*uvec'*P;                % update P(n)
end

% squared deviation of the recovered signal from the clean signal
MSE = e' - signal(1:length(e));
MSE = MSE.^2;
end

Function call:

clear all; close all; clc;

order = 5;
t = 0:1/40000:5;                       % time range: 0-5 seconds
[signal,fs] = audioread('flute.wav');

x = wgn(length(signal),1,0);           % white Gaussian noise source
a = 100; b = [2 2];
noise = filter(b,a,x);                 % shape and scale the noise
noise = noise*2;
ref = noise;                           % reference input n1(n) to the canceller
N = length(signal);
primary = signal + noise;              % desired signal d(n) = s(n) + n2(n)

delta = 0.5;
[e1,w1,MSE1] = RLSFunction(order,delta,ref,primary,N,signal);
delta = 1;
[e2,w2,MSE2] = RLSFunction(order,delta,ref,primary,N,signal);
delta = 2;
[e3,w3,MSE3] = RLSFunction(order,delta,ref,primary,N,signal);
delta = 3;
[e4,w4,MSE4] = RLSFunction(order,delta,ref,primary,N,signal);

figure(1);
subplot(4,1,1); plot(MSE1);
xlabel('N'); ylabel('Error'); title('Mean squared error for delta=0.5');
subplot(4,1,2); plot(MSE2);
xlabel('N'); ylabel('Error'); title('Mean squared error for delta=1');
subplot(4,1,3); plot(MSE3);
xlabel('N'); ylabel('Error'); title('Mean squared error for delta=2');
subplot(4,1,4); plot(MSE4);
xlabel('N'); ylabel('Error'); title('Mean squared error for delta=3');

figure(2);
subplot(4,1,1); plot(signal);  title('Original signal');
subplot(4,1,2); plot(noise);   title('White Gaussian noise');
subplot(4,1,3); plot(primary); title('Signal + noise');
subplot(4,1,4); plot(e3);      title('e(n) resembling desired signal');

% sound(signal, fs);  pause(5);
% sound(primary, fs); pause(5);
% sound(ref, fs);     pause(5);
% sound(desired2, fs)