Fast exact variable order affine projection algorithm



Signal Processing 92 (2012) 2308–2314


Miguel Ferrer, Alberto Gonzalez, Maria de Diego, Gema Piñero

Audio and Communications Signal Processing Group (GTAC), Institute of Telecommunications and Multimedia Applications (iTEAM), Universitat Politècnica de València, Spain

Article info

Article history:

Received 1 August 2011

Received in revised form 7 March 2012

Accepted 9 March 2012

Available online 16 March 2012

Keywords:

Adaptive filters

Affine projection algorithm

Fast algorithm

Computational complexity

Efficient matrix inversion

Partially supported by TEC2009-13741, PROMETEO 2009/0013, GV/2010/027, ACOMP/2010/006 and UPV PAID-06-09.

Corresponding author: Miguel Ferrer, e-mail [email protected].

Abstract

Variable order affine projection algorithms have recently been presented for use when not only the convergence speed of the algorithm has to be adjusted but also its computational cost and its final residual error. This kind of affine projection (AP) algorithm improves the standard AP algorithm performance at steady state by reducing the residual mean square error. Furthermore, these algorithms optimize computational cost by dynamically adjusting their projection order to convergence speed requirements. The main cost of the standard AP algorithm is due to the matrix inversion that appears in the coefficient update equation. Most efforts to decrease the computational cost of these algorithms have focused on the optimization of this matrix inversion. This paper deals with optimization of the computational cost of variable order AP algorithms by recursive calculation of the inverse signal matrix. Thus, a fast exact variable order AP algorithm is proposed. Exact iterative expressions to calculate the inverse matrix when the algorithm projection order either increases or decreases are incorporated into a variable order AP algorithm, leading to a reduced complexity implementation. The simulation results show that the proposed algorithm performs similarly to the variable order AP algorithms while having a lower computational complexity.


1. Introduction

The affine projection (AP) algorithm [1,2] shows better convergence speed than the least mean square (LMS) algorithm [3], and it is simple, robust and stable. The efficiency of AP algorithms has been reported in a variety of applications, such as active noise control [4], acoustic equalization [5] and echo cancellation [6]. The behavior of the AP is mainly determined by a parameter called the projection order, N. The AP algorithm behaves similarly to the normalized LMS algorithm [3] when N = 1 and to the recursive least squares (RLS) adaptive algorithm [7] when N increases. Therefore the AP shows slow convergence and little residual error when N is small, and fast convergence and higher residual error for large values of N. Variable step-size affine projection algorithms have already been proposed [8-11] to overcome this duality and achieve better performance in steady state without penalizing the speed of adaptation of the algorithm. Although these strategies achieve a better final error in steady state, their computational cost remains invariant throughout algorithm execution and depends mainly on the projection order. A possible improvement to overcome these drawbacks is the adaptation of the projection order in response to algorithm performance. Thus, some variable order AP algorithms [12-14] have been developed recently in order to dynamically adjust their projection order to convergence speed needs, and to decrease the computational cost of the algorithm and its residual error. However, this promising improvement of the AP algorithm still has some performance aspects to be analyzed, such as its computational cost, which motivates the development of fast versions.


Fig. 1. Basic adaptive system identification scheme.


Despite its computational cost, the AP algorithm can be considered good enough [15] in comparison with other algorithms that exhibit similar performance, like the RLS algorithm. Many efforts have been made to decrease its computational cost, as described in [16-21]. However, the strategies based on approximations or models used to decrease the computational cost of the algorithm can slightly worsen the algorithm performance in some cases. Therefore, this paper avoids these methods and focuses on the efficient calculation of the inverse signal matrix that appears within the update equations of all the variable order AP algorithms. By using this method, efficient approaches are obtained that behave exactly like the original non-fast versions when an accurate initial value of the inverse matrix is provided, and that show a significant reduction of their computational cost. Variable order AP algorithms can eventually change their projection order between iterations; therefore a recursive method to calculate the inverse matrix has to consider inverse matrix updates from a previous inverse matrix of a different size. Among the different variable order AP algorithms available, and to illustrate the performance of the efficient method introduced, the authors have used the variable order AP (VAP) algorithm. The application of the fast recursive method to the VAP provides the fast exact variable order AP (FExVAP). On the other hand, the efficient computation of the matrix inversion can be applied to other variable order AP algorithms such as the evolving order AP (E-AP) algorithm [13]. Thus, some simulations of the E-AP and of its fast exact approach, the FExE-AP, which uses the proposed fast exact inversion method, have also been carried out.

Section 2 briefly describes the AP algorithm and the foundation of its variable order versions, and a recursive method to calculate the inverse signal matrix from its previous values for the VAP algorithm is developed in Section 3. The simulation results are presented in Section 4, comparing the VAP, the E-AP, their fast exact approaches (FExVAP and FExE-AP, respectively) and the original AP algorithm. The reduction of the computational cost in terms of number of multiplications is also presented in Section 4. Finally, conclusions are summarized in Section 5.

2. The variable order affine projection algorithm

The AP algorithm attempts to generate a version of an unknown signal, d(n), by filtering a reference signal, x(n), correlated with d(n). Fig. 1 illustrates an example of system identification where x(n) and d(n) are related through a transversal adaptive filter. The adaptive filter coefficients are updated by the following equation [22] for a projection order N:

$$\mathbf{w}_L(n) = \mathbf{w}_L(n-1) + \mu\,\mathbf{A}^T(n)\,[\mathbf{A}(n)\mathbf{A}^T(n) + \delta\mathbf{I}]^{-1}\,\mathbf{e}_N(n), \quad (1)$$

where $\mathbf{I}$ represents the $N \times N$ identity matrix, $\mathbf{w}_L(n)$ is a vector that comprises the $L$ adaptive filter coefficients, and the matrix $\mathbf{A}(n)$ of size $N \times L$ is defined as

$$\mathbf{A}^T(n) = [\mathbf{x}_L(n)\ \mathbf{x}_L(n-1)\ \cdots\ \mathbf{x}_L(n-N+1)], \quad (2)$$

with $\mathbf{x}_L^T(n) = [x(n)\ x(n-1)\ \cdots\ x(n-L+1)]$, and $\mathbf{e}_N(n)$ is given by

$$\mathbf{e}_N(n) = \mathbf{d}_N(n) - \mathbf{A}(n)\,\mathbf{w}_L(n-1), \quad (3)$$

with

$$\mathbf{d}_N^T(n) = [d(n)\ d(n-1)\ \cdots\ d(n-N+1)]. \quad (4)$$

Constants $\mu$ and $\delta$ are called, respectively, the convergence and the regularization parameters [22].
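For readers who prefer code, one iteration of the update (1)-(4) can be written compactly with NumPy. This is a minimal illustrative sketch, not the authors' implementation; the function and variable names (ap_update, x_buf, d_buf) and the direct call to numpy.linalg.solve are assumptions made here for clarity.

```python
import numpy as np

def ap_update(w, x_buf, d_buf, N, L, mu, delta):
    """One iteration of the affine projection update, Eqs. (1)-(4).

    w     : current filter coefficients, shape (L,)
    x_buf : most recent reference samples [x(n), x(n-1), ...], shape (L+N-1,)
    d_buf : most recent desired samples [d(n), ..., d(n-N+1)], shape (N,)
    """
    # A(n): N x L matrix whose rows are x_L^T(n-k), k = 0..N-1   (Eq. (2))
    A = np.array([x_buf[k:k + L] for k in range(N)])
    # Error vector e_N(n) = d_N(n) - A(n) w_L(n-1)               (Eqs. (3)-(4))
    e = d_buf - A @ w
    # R_N(n) = A(n) A^T(n) + delta*I, inverted directly in this sketch
    R = A @ A.T + delta * np.eye(N)
    # Coefficient update, Eq. (1)
    return w + mu * A.T @ np.linalg.solve(R, e)
```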

Variable order AP algorithms use the update equation (1), but their projection order can change between iterations. The projection order varies in order to speed up convergence and to minimize computational cost and residual error, depending on certain conditions that can differ slightly between the different variable order AP approaches. For instance, the evolving order AP described in [13] uses the instantaneous value of the residual error signal power to update the projection order and keeps a single $\mu$. Even though this leads to an improvement due to the change in projection order, as a general rule it does not achieve the optimum residual error in the steady state, since this residual error depends on both $\mu$ and the projection order. Alternatively, the AP approach used in this paper, named variable order AP (VAP), changes the projection order when the, also variable, convergence parameter $\mu$ exceeds given maximum or minimum values. Therefore, the described algorithm (VAP) changes both the step-size parameter and the projection order (a similar AP algorithm for echo cancellation that changes both parameters was presented in [23]). Thus,

$$N(n+1) = \begin{cases} \min\{N(n)+1,\,N_{max}\}, & \mu(n) > \mu_{max}\cdot\mu_{Nup} \\ N(n), & \text{otherwise} \\ \max\{N(n)-1,\,1\}, & \mu(n) < \mu_{max}\cdot\mu_{Ndown}, \end{cases} \quad (5)$$

where $N_{max}$ is the highest projection order and $\mu_{Nup}$ and $\mu_{Ndown}$ are the maximum and minimum thresholds relative to $\mu_{max}$. Due to the simplicity of the algorithm, these parameters are selected depending on the objectives. In this way, if the goal is to perform well in the transient state we need a low $\mu_{Ndown}$ value, whereas a low $\mu_{Nup}$ value provides good tracking capabilities. On the other hand, high threshold values are required to obtain good steady-state behavior. The maximum step-size parameter $\mu_{max}$ in (5) is chosen to guarantee both fast convergence speed and filter stability, and ideally should be less than 1 [24,25].
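A direct transcription of rule (5) could look as follows. This is an illustrative sketch only; the parameter names mirror the thresholds in the text rather than any published code, and the default threshold values are those used later in the simulations of Section 4.

```python
def update_projection_order(N, mu, mu_max, N_max, mu_Nup=0.5, mu_Ndown=0.25):
    """Projection order update of the VAP algorithm, Eq. (5)."""
    if mu > mu_max * mu_Nup:          # step size still large: keep converging fast
        return min(N + 1, N_max)      # raise the order, capped at N_max
    elif mu < mu_max * mu_Ndown:      # step size small: close to steady state
        return max(N - 1, 1)          # lower the order to save computation
    return N                          # otherwise keep the current order
```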

The variation rule for the convergence parameter can be chosen by attempting to ensure that the mean square deviation of the filter weights undergoes the largest decrease between algorithm iterations, and it is given by [8]

$$\mu(n) = \mu_{max}\,\frac{\Vert\mathbf{p}(n)\Vert^2}{\Vert\mathbf{p}(n)\Vert^2 + C}, \quad (6)$$

where $\mathbf{p}(n)$ is an estimation of the mean value of $\mathbf{A}^T(n)[\mathbf{A}(n)\mathbf{A}^T(n)+\delta\mathbf{I}]^{-1}\mathbf{e}_N(n)$, which is obtained from an exponential weighting of its instantaneous value as

$$\mathbf{p}(n) = \alpha\,\mathbf{p}(n-1) + (1-\alpha)\,\mathbf{A}^T(n)[\mathbf{A}(n)\mathbf{A}^T(n)+\delta\mathbf{I}]^{-1}\mathbf{e}_N(n) \quad (7)$$

with $0 < \alpha < 1$, and $C$ is a positive parameter that depends on the algorithm projection order. $\mu(n)$ is equivalent to the constant $\mu$ parameter in (1).
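The step-size recursion (6)-(7) can be sketched as below. Again, this is only an illustrative reading of the equations, with the estimate p carried between iterations by the caller; the function name and signature are assumptions.

```python
import numpy as np

def update_step_size(p, A, e, delta, mu_max, alpha, C):
    """Variable step size of Eqs. (6)-(7).

    p : previous estimate p(n-1) of the mean update direction, shape (L,)
    A : signal matrix A(n), shape (N, L);  e : error vector e_N(n), shape (N,)
    """
    R = A @ A.T + delta * np.eye(A.shape[0])
    # Instantaneous update direction A^T(n)[A(n)A^T(n)+delta*I]^{-1} e_N(n)
    g = A.T @ np.linalg.solve(R, e)
    p = alpha * p + (1.0 - alpha) * g          # Eq. (7)
    norm2 = float(p @ p)
    mu = mu_max * norm2 / (norm2 + C)          # Eq. (6)
    return mu, p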

Moreover, inversion of the $\mathbf{R}_N(n)$ matrix, with $\mathbf{R}_N(n) = \mathbf{A}(n)\mathbf{A}^T(n)+\delta\mathbf{I}$, requires $O(N^3/2)$ multiplications, which can represent the most costly part of the algorithm. The size of this matrix changes dynamically in variable order AP algorithms. There are methods to recursively calculate this matrix inversion with a much lower cost, but they have to be extended to the variable order approach, which means considering cases where the projection order either increases or decreases between algorithm iterations.

3. Efficient matrix inversion

As noted above, the largest computational cost of the AP algorithm is initially due to the computation of the matrix $\mathbf{R}_N(n)$ and its inversion, which requires $LN^2 + O(N^3/2)$ multiplications. However, $\mathbf{A}(n)\mathbf{A}^T(n)$ can be computed recursively as

$$\mathbf{A}(n)\mathbf{A}^T(n) = \mathbf{M}(n) = \mathbf{M}(n-1) + \mathbf{x}_N(n)\mathbf{x}_N^T(n) - \mathbf{x}_N(n-L)\mathbf{x}_N^T(n-L), \quad (8)$$

using $2N^2$ multiplications, thus this cost is reduced to $2N^2 + O(N^3/2)$ multiplications.
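The rank-two recursion (8) is straightforward to express in NumPy; the short sketch below assumes the caller keeps the two data vectors x_N(n) and x_N(n-L) up to date, and the function name is illustrative.

```python
import numpy as np

def update_M(M_prev, x_new, x_old):
    """Recursive update of M(n) = A(n) A^T(n), Eq. (8).

    x_new : x_N(n)   = [x(n), ..., x(n-N+1)]
    x_old : x_N(n-L) = [x(n-L), ..., x(n-L-N+1)]
    """
    return M_prev + np.outer(x_new, x_new) - np.outer(x_old, x_old)
```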

In order to further reduce this computational cost, recursive algorithms to calculate $\mathbf{R}_N^{-1}(n)$ from the matrix of the previous iteration, $\mathbf{R}_N^{-1}(n-1)$, can be used. Nevertheless, recursive algorithms to calculate either $\mathbf{R}_{N-1}^{-1}(n)$ or $\mathbf{R}_{N+1}^{-1}(n)$ from the previous values with a different projection order, that is, from $\mathbf{R}_N^{-1}(n-1)$, also have to be developed in order to deal with the recent variable order versions of the AP algorithm. The different cases, discussed below for the FExVAP algorithm, are summarized in Algorithm 1.

Algorithm 1. FExVAP algorithm.

Input: reference signal $x(n)$, matrix $\mathbf{R}_N^{-1}(n-1)$, and vectors $\mathbf{r}_N(n-1)$ and $\mathbf{r}_{N-1}(n-1)$
Output: $\mathbf{R}_N^{-1}(n)$ at time $n$

1:  if $N(n-1) = N(n)$ then
2:    update the vectors $\mathbf{x}_N(n)$ and $\mathbf{x}_N(n-L)$
3:    $\mathbf{a}_1(n) = \mathbf{R}_N^{-1}(n-1)\,\mathbf{x}_N(n)$
4:    $\mathbf{Q}_N^{-1}(n) = \mathbf{R}_N^{-1}(n-1) - \mathbf{a}_1(n)\mathbf{a}_1^T(n)/[1 + \mathbf{x}_N^T(n)\mathbf{a}_1(n)]$
5:    $\mathbf{b}(n) = \mathbf{Q}_N^{-1}(n)\,\mathbf{x}_N(n-L)$
6:    $\mathbf{R}_N^{-1}(n) = \mathbf{Q}_N^{-1}(n) + \mathbf{b}(n)\mathbf{b}^T(n)/[1 - \mathbf{x}_N^T(n-L)\mathbf{b}(n)]$
7:  else if $N(n-1) = N(n)+1$ then
8:    update the vectors $\mathbf{x}_{N-1}(n)$ and $\mathbf{x}_{N-1}(n-L)$
9:    compute $\mathbf{R}_N^{-1}(n)$ as in the case $N(n-1) = N(n)$
10:   derive $(\bar{\mathbf{R}})_N^{-1}(n)$ from $\mathbf{R}_N^{-1}(n)$
11:   $\mathbf{r}_{N-1}(n) = \mathbf{r}_{N-1}(n-1) + \mathbf{x}_{N-1}(n)\,x(n-N+1) - \mathbf{x}_{N-1}(n-L)\,x(n-N-L+1)$
12:   $r_{N-1}(n) = r_{N-1}(n-1) + x^2(n-N+1) - x^2(n-N-L+1)$
13:   $\mathbf{a}_2(n) = (\bar{\mathbf{R}})_N^{-1}(n)\,\mathbf{r}_{N-1}(n)$
14:   $\mathbf{R}_{N-1}^{-1}(n) = (\bar{\mathbf{R}})_N^{-1}(n) - \mathbf{a}_2(n)[r_{N-1}(n) + \mathbf{r}_{N-1}^T(n)\mathbf{a}_2(n)]^{-1}\mathbf{a}_2^T(n)$
15: else if $N(n-1) = N(n)-1$ then
16:   update the vectors $\mathbf{x}_N(n)$ and $\mathbf{x}_N(n-L-1)$
17:   $\mathbf{r}_N(n) = \mathbf{r}_N(n-1) + \mathbf{x}_N(n-1)\,x(n) - \mathbf{x}_N(n-L-1)\,x(n-L)$
18:   $\mathbf{a}_3(n) = \mathbf{R}_N^{-1}(n-1)\,\mathbf{r}_N(n)$
19:   $r_N(n) = r_N(n-1) + x^2(n) - x^2(n-L)$
20:   $\alpha(n) = r_N(n) - \mathbf{a}_3^T(n)\,\mathbf{r}_N(n)$
21:   $\hat{\mathbf{a}}_3(n) = [1,\,-\mathbf{a}_3^T(n)]^T$
22:   $\mathbf{R}_{N+1}^{-1}(n) = \begin{pmatrix} 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{R}_N^{-1}(n-1) \end{pmatrix} + \hat{\mathbf{a}}_3(n)\hat{\mathbf{a}}_3^T(n)/\alpha(n)$
23: end if

3.1. Iterative calculation of $\mathbf{R}_N^{-1}(n)$ from $\mathbf{R}_N^{-1}(n-1)$

The iterative calculation of $\mathbf{R}_N^{-1}(n)$ from $\mathbf{R}_N^{-1}(n-1)$ is similar to the matrix inversion used in the sliding window RLS algorithm [7]. An equivalent method can be found in [26] or [27].

From (8) the following equations can be given:

$$\mathbf{R}_N(n) = \mathbf{R}_N(n-1) + \mathbf{x}_N(n)\mathbf{x}_N^T(n) - \mathbf{x}_N(n-L)\mathbf{x}_N^T(n-L) = \mathbf{Q}_N(n) - \mathbf{x}_N(n-L)\mathbf{x}_N^T(n-L) \quad (9)$$

with

$$\mathbf{Q}_N(n) = \mathbf{R}_N(n-1) + \mathbf{x}_N(n)\mathbf{x}_N^T(n), \quad (10)$$

and $\mathbf{x}_N^T(n) = [x(n)\ x(n-1)\ \cdots\ x(n-N+1)]$.

We can carry out the following matrix identification in (9):
1. $\mathbf{C} = \mathbf{R}_N(n)$
2. $\mathbf{H}^{-1} = \mathbf{Q}_N(n)$
3. $\mathbf{U} = \mathbf{x}_N(n-L)$
4. $\mathbf{W} = -1$

and then apply the matrix inversion lemma (see Appendix A) to calculate $\mathbf{R}_N^{-1}(n)$ as

$$\mathbf{R}_N^{-1}(n) = \mathbf{Q}_N^{-1}(n) + \mathbf{b}(n)[1 - \mathbf{x}_N^T(n-L)\mathbf{b}(n)]^{-1}\mathbf{b}^T(n), \quad (11)$$

where $\mathbf{b}(n) = \mathbf{Q}_N^{-1}(n)\,\mathbf{x}_N(n-L)$.

The matrix inversion lemma can be used again to calculate $\mathbf{Q}_N^{-1}(n)$, using in (10)
1. $\mathbf{C} = \mathbf{Q}_N(n)$
2. $\mathbf{H}^{-1} = \mathbf{R}_N(n-1)$
3. $\mathbf{U} = \mathbf{x}_N(n)$
4. $\mathbf{W} = 1$

Thus

$$\mathbf{Q}_N^{-1}(n) = \mathbf{R}_N^{-1}(n-1) - \mathbf{a}_1(n)[1 + \mathbf{x}_N^T(n)\mathbf{a}_1(n)]^{-1}\mathbf{a}_1^T(n), \quad (12)$$

where $\mathbf{a}_1(n) = \mathbf{R}_N^{-1}(n-1)\,\mathbf{x}_N(n)$. Therefore, a recursive algorithm to calculate $\mathbf{R}_N^{-1}(n)$ from $\mathbf{R}_N^{-1}(n-1)$ is finally summarized in Algorithm 1.

In this way, $4(N^2+N)$ multiplications are needed to obtain $\mathbf{R}_N^{-1}(n)$ using the above fast exact strategy, thereby reducing the original cost mainly for high projection orders.
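Under the assumption that $\mathbf{R}_N^{-1}(n-1)$ is available, steps 3-6 of Algorithm 1 (that is, (12) followed by (11)) can be sketched as below, together with a quick numerical check of the algebra. This is an illustrative sketch rather than the authors' code; any regularization $\delta\mathbf{I}$ is assumed to be carried in the matrix from its initial value.

```python
import numpy as np

def inverse_same_order(R_inv_prev, x_new, x_old):
    """Update R_N^{-1}(n) from R_N^{-1}(n-1) via Eqs. (11)-(12).

    x_new : x_N(n),  x_old : x_N(n-L)
    """
    # Rank-one "add" step, Eq. (12)
    a1 = R_inv_prev @ x_new
    Q_inv = R_inv_prev - np.outer(a1, a1) / (1.0 + x_new @ a1)
    # Rank-one "remove" step, Eq. (11)
    b = Q_inv @ x_old
    return Q_inv + np.outer(b, b) / (1.0 - x_old @ b)

# Quick check of the algebra against direct inversion
rng = np.random.default_rng(0)
N = 5
B = rng.standard_normal((N, 2 * N))
R_prev = B @ B.T + 10.0 * np.eye(N)            # any SPD matrix plays R_N(n-1)
x_new, x_old = rng.standard_normal(N), rng.standard_normal(N)
R_new = R_prev + np.outer(x_new, x_new) - np.outer(x_old, x_old)
err = np.abs(inverse_same_order(np.linalg.inv(R_prev), x_new, x_old)
             - np.linalg.inv(R_new)).max()
print(f"max deviation from direct inversion: {err:.2e}")
```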


3.2. Iterative calculation of $\mathbf{R}_{N-1}^{-1}(n)$ from $\mathbf{R}_N^{-1}(n-1)$

This calculation is made in two steps. First, $\mathbf{R}_N^{-1}(n)$ is calculated from $\mathbf{R}_N^{-1}(n-1)$ by using the matrix inversion lemma described in Section 3.1. Then, a simple relation allows $\mathbf{R}_{N-1}^{-1}(n)$ to be calculated from $\mathbf{R}_N^{-1}(n)$. To achieve this, matrix $\mathbf{R}_N(n)$ can be rewritten as

$$\mathbf{R}_N(n) = \begin{pmatrix} \mathbf{R}_{N-1}(n) & \mathbf{r}_{N-1}(n) \\ \mathbf{r}_{N-1}^T(n) & r_{N-1}(n) \end{pmatrix}, \quad (13)$$

where the vector $\mathbf{r}_{N-1}(n)$ and the scalar $r_{N-1}(n)$ can also be recursively calculated by

$$\mathbf{r}_{N-1}(n) = \mathbf{r}_{N-1}(n-1) + \mathbf{x}_{N-1}(n)\,x(n-N+1) - \mathbf{x}_{N-1}(n-L)\,x(n-N-L+1) \quad (14)$$

and

$$r_{N-1}(n) = r_{N-1}(n-1) + x^2(n-N+1) - x^2(n-N-L+1). \quad (15)$$

From (B.1) in Appendix B and by using the following identification statements,
1. $\mathbf{A}_{11} = \mathbf{R}_{N-1}(n)$
2. $\mathbf{A}_{12} = \mathbf{r}_{N-1}(n)$
3. $\mathbf{A}_{21} = \mathbf{r}_{N-1}^T(n)$
4. $\mathbf{A}_{22} = r_{N-1}(n)$
5. $\mathbf{F}_{11}^{-1} = (\bar{\mathbf{R}})_N^{-1}(n)$, which comprises the $(N-1)\times(N-1)$ upper left elements of $\mathbf{R}_N^{-1}(n)$,

we can rewrite the inverse of (13) as

$$\mathbf{R}_N^{-1}(n) = \begin{pmatrix} (\bar{\mathbf{R}})_N^{-1}(n) & -(\bar{\mathbf{R}})_N^{-1}(n)\,\mathbf{r}_{N-1}(n)\,r_{N-1}^{-1}(n) \\ -r_{N-1}^{-1}(n)\,\mathbf{r}_{N-1}^T(n)\,(\bar{\mathbf{R}})_N^{-1}(n) & r_{N-1}^{-1}(n) + r_{N-1}^{-1}(n)\,\mathbf{r}_{N-1}^T(n)\,(\bar{\mathbf{R}})_N^{-1}(n)\,\mathbf{r}_{N-1}(n)\,r_{N-1}^{-1}(n) \end{pmatrix}. \quad (16)$$

Note that $\mathbf{R}_N^{-1}(n)$ has been previously calculated as in (11) by using the matrix inversion lemma. Finally, the matrix inversion lemma can be applied again to calculate $\mathbf{R}_{N-1}^{-1}(n)$ from $(\bar{\mathbf{R}})_N^{-1}(n)$ and the previously calculated values of $\mathbf{r}_{N-1}(n)$ and $r_{N-1}(n)$. This method is described as follows:
1. $\mathbf{a}_2(n) = (\bar{\mathbf{R}})_N^{-1}(n)\,\mathbf{r}_{N-1}(n)$
2. $\mathbf{R}_{N-1}^{-1}(n) = (\bar{\mathbf{R}})_N^{-1}(n) - \mathbf{a}_2(n)[r_{N-1}(n) + \mathbf{r}_{N-1}^T(n)\mathbf{a}_2(n)]^{-1}\mathbf{a}_2^T(n)$.

The number of multiplications needed to calculate matrix $\mathbf{R}_{N-1}^{-1}(n)$ from $\mathbf{R}_N^{-1}(n-1)$ includes: the computation of $\mathbf{R}_N^{-1}(n)$, which has a cost of $4(N+N^2)$ multiplications, the calculation of $\mathbf{r}_{N-1}(n)$ from (14) and $r_{N-1}(n)$ from (15), which requires $2N$ multiplications, and finally the application of the matrix inversion lemma with a cost of $2N^2-2N$ multiplications. Therefore, the total number of multiplications required is $6N^2+4N$.
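A sketch of the order-decrease step (the second branch of Algorithm 1, steps 13-14) is given below, assuming $\mathbf{R}_N^{-1}(n)$ has already been brought up to date with the function of Section 3.1 and that the vector $\mathbf{r}_{N-1}(n)$ and the scalar $r_{N-1}(n)$ have been updated via (14) and (15). Names and signature are illustrative, not the authors' code.

```python
import numpy as np

def inverse_decrease_order(R_inv, r_vec, r_sc):
    """Obtain R_{N-1}^{-1}(n) from R_N^{-1}(n), Eqs. (13)-(16).

    R_inv : R_N^{-1}(n), shape (N, N)
    r_vec : vector r_{N-1}(n) from Eq. (14), shape (N-1,)
    r_sc  : scalar r_{N-1}(n) from Eq. (15)
    """
    R_bar_inv = R_inv[:-1, :-1]      # upper-left (N-1)x(N-1) block, i.e. F_11^{-1}
    a2 = R_bar_inv @ r_vec
    return R_bar_inv - np.outer(a2, a2) / (r_sc + r_vec @ a2)
```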

3.3. Iterative calculation of $\mathbf{R}_{N+1}^{-1}(n)$ from $\mathbf{R}_N^{-1}(n-1)$

We can consider in this case that

$$\mathbf{R}_{N+1}(n) = \begin{pmatrix} r_N(n) & \mathbf{r}_N^T(n) \\ \mathbf{r}_N(n) & \mathbf{R}_N(n-1) \end{pmatrix}, \quad (17)$$

where the scalar $r_N(n)$ and the vector $\mathbf{r}_N(n)$ can be recursively calculated as

$$r_N(n) = r_N(n-1) + x^2(n) - x^2(n-L) \quad (18)$$

and

$$\mathbf{r}_N(n) = \mathbf{r}_N(n-1) + \mathbf{x}_N(n-1)\,x(n) - \mathbf{x}_N(n-L-1)\,x(n-L). \quad (19)$$

Let us define $\mathbf{a}_3(n) = \mathbf{R}_N^{-1}(n-1)\,\mathbf{r}_N(n)$ and, making use again of expressions (B.1) and (B.2), it follows that

$$\alpha(n) = r_N(n) - \mathbf{a}_3^T(n)\,\mathbf{r}_N(n). \quad (20)$$

Since $\mathbf{R}_N^{-1}(n-1)$ is a symmetric matrix, the inverse of (17) can be written as

$$\mathbf{R}_{N+1}^{-1}(n) = \begin{pmatrix} 1/\alpha(n) & -\mathbf{a}_3^T(n)/\alpha(n) \\ -\mathbf{a}_3(n)/\alpha(n) & \mathbf{R}_N^{-1}(n-1) + \mathbf{a}_3(n)\mathbf{a}_3^T(n)/\alpha(n) \end{pmatrix} = \begin{pmatrix} 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{R}_N^{-1}(n-1) \end{pmatrix} + \hat{\mathbf{a}}_3(n)\hat{\mathbf{a}}_3^T(n)/\alpha(n), \quad (21)$$

with $\hat{\mathbf{a}}_3(n) = [1,\,-\mathbf{a}_3^T(n)]^T$ and $\mathbf{0}$ a zero column vector of size $N$.

Thus, $\mathbf{R}_{N+1}^{-1}(n)$ can be calculated from $\mathbf{R}_N^{-1}(n-1)$ as follows:
1. $\mathbf{r}_N(n) = \mathbf{r}_N(n-1) + \mathbf{x}_N(n-1)\,x(n) - \mathbf{x}_N(n-L-1)\,x(n-L)$
2. $\mathbf{a}_3(n) = \mathbf{R}_N^{-1}(n-1)\,\mathbf{r}_N(n)$
3. $r_N(n) = r_N(n-1) + x^2(n) - x^2(n-L)$
4. $\alpha(n) = r_N(n) - \mathbf{a}_3^T(n)\,\mathbf{r}_N(n)$
5. $\hat{\mathbf{a}}_3(n) = [1,\,-\mathbf{a}_3^T(n)]^T$
6. $\mathbf{R}_{N+1}^{-1}(n) = \begin{pmatrix} 0 & \mathbf{0}^T \\ \mathbf{0} & \mathbf{R}_N^{-1}(n-1) \end{pmatrix} + \hat{\mathbf{a}}_3(n)\hat{\mathbf{a}}_3^T(n)/\alpha(n)$

Finally, the total number of multiplications required reaches $2N^2+6N+4$.
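Similarly, the order-increase branch of Algorithm 1 can be sketched as below, assuming the vector $\mathbf{r}_N(n)$ and the scalar $r_N(n)$ have been updated through (19) and (18). This is one reading of Eq. (21), with illustrative names, not the authors' code.

```python
import numpy as np

def inverse_increase_order(R_inv_prev, r_vec, r_sc):
    """Obtain R_{N+1}^{-1}(n) from R_N^{-1}(n-1), Eqs. (17)-(21).

    R_inv_prev : R_N^{-1}(n-1), shape (N, N)
    r_vec      : vector r_N(n) from Eq. (19), shape (N,)
    r_sc       : scalar r_N(n) from Eq. (18)
    """
    N = R_inv_prev.shape[0]
    a3 = R_inv_prev @ r_vec
    alpha = r_sc - a3 @ r_vec                      # Eq. (20), Schur complement
    a3_hat = np.concatenate(([1.0], -a3))          # hat{a}_3(n) = [1, -a_3^T]^T
    out = np.zeros((N + 1, N + 1))
    out[1:, 1:] = R_inv_prev                       # embed R_N^{-1}(n-1)
    return out + np.outer(a3_hat, a3_hat) / alpha  # Eq. (21)
```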

4. Simulation results

As previously described, the exact inverse matrices required by variable order AP algorithms can be recursively calculated with a low computational cost. These recursive calculations give an exact inverse when the initial values of the inverses are accurate enough. For this reason, the algorithm must start with a setup period of $N \times L$ iterations before beginning the recursive calculations. Under these conditions the behavior of the FExVAP algorithm is identical to that of the VAP algorithm apart from, obviously, its computational cost.

In order to test the performance of the proposed FExVAP algorithm compared to the VAP algorithm and the AP algorithm with N = 10, several simulations have been carried out. In addition, simulations of the E-AP and its computationally efficient approach, the FExE-AP, have also been carried out and compared with the previously mentioned algorithms. The learning curves of the algorithms are calculated as $10\log[e^2(n)/d^2(n)]$. These curves, as well as the number of multiplications per iteration required, are shown in Figs. 2 and 3. The maximum value of the projection order was N = 10, and $\mu_{Nup} = 1/2$ and $\mu_{Ndown} = 1/4$ for the variable order AP algorithms.
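A minimal sketch of this learning-curve measurement is given below. The trial-averaging detail (averaging the squared signals over trials before taking the ratio) is one plausible reading, since the text does not spell it out, and the array names are hypothetical.

```python
import numpy as np

def learning_curve(e_trials, d_trials):
    """10*log10( e^2(n) / d^2(n) ), averaged over independent trials.

    e_trials, d_trials : arrays of shape (num_trials, num_iterations)
    """
    ratio = np.mean(e_trials ** 2, axis=0) / np.mean(d_trials ** 2, axis=0)
    return 10.0 * np.log10(ratio)
```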


Fig. 2. Learning curves for the AP (N = 10), the VAP, the E-AP, and their fast exact approaches (FExVAP and FExE-AP) for a stationary environment during the first 1000 iterations.

Fig. 3. Number of multiplications per iteration for the AP (N = 10), (a) the VAP and the FExVAP, and (b) the E-AP and the FExE-AP in a stationary environment during the first 1000 iterations.


Simulations were performed using the basic adaptive filtering scheme shown in Fig. 1, where d(n) was chosen as x(n) filtered through a finite impulse response filter (P) of 20 randomly chosen coefficients, and the input signal x(n) was a zero mean Gaussian random signal of unit variance. The size of the adaptive filter was fixed to 19 coefficients, thus a non-zero residual error was always assured. Fig. 2 illustrates the algorithm behavior when the filter used to generate the desired signal, d(n), remains invariable. The learning curves depicted in Fig. 2 have been obtained by averaging over 3000 independent trials of 10,000 iterations (but only the first 1000 iterations are shown).

On the other hand, the performance of the algorithms when the filter changes during the simulations is shown in Figs. 4 and 5. In this case, the learning curves in Fig. 4 have been obtained from 3000 independent realizations of 30,000 iterations.

Figs. 2-5 show that the VAP and its fast version exhibit the same learning curves and that both outperform the AP algorithm (N = 10) in terms of multiplications required and final residual error. Furthermore, the FExVAP algorithm requires fewer multiplications than the original VAP, mainly for high projection orders. Regarding the E-AP, its efficient version (FExE-AP) provides the same learning curve, which falls close to the VAP curve with slightly poorer performance in terms of convergence speed and final residual error. It has to be noted that the AP algorithm used the maximum allowed projection order in these simulations, N = 10, therefore the speed of convergence of the AP algorithm was the maximum available. The VAP algorithm behaves as fast as the AP algorithm during transient periods since it is able to dynamically adjust its projection order. A comparison of the total number of multiplications of the five algorithms is shown in Table 1. These values comprise the multiplications needed to carry out the total number of iterations of each algorithm (10,000 iterations for the stationary and 30,000 for the non-stationary environment). It can be seen that the FExVAP algorithm needs fewer multiplications than the VAP, just as the FExE-AP needs fewer multiplications than the E-AP. The computational cost reduction is more significant when the algorithm spends more time using higher orders.

Fig. 4. Learning curves for the AP (N = 10), the VAP, the E-AP, and their fast exact approaches (FExVAP and FExE-AP) in a non-stationary environment.


Fig. 5. Number of multiplications per iteration for the AP (N = 10), (a) the VAP and the FExVAP, and (b) the E-AP and the FExE-AP in a non-stationary environment.

Table 1. Comparative total multiplications for stationary (Mult-1) and non-stationary (Mult-2) environments.

Algorithm     Mult-1      Mult-2
AP (N = 10)   29.47 X1    5.12 X2
VAP           1.30 X1     1.88 X2
FExVAP        X1          X2
E-AP          1.64 X1     1.93 X2
FExE-AP       1.34 X1     1.05 X2

Fig. 6. Learning curves for the AP (N = 10) and for the VAP and the FExVAP with a suitable setup stage and with an inexact initial value of $\mathbf{R}_N^{-1}(n)$, for SNR values of 20, 30 and 40 dB.


Finally, a last experiment was conducted to check the sensitivity of the presented matrix inversion method to noise. The proposed recursive matrix inversion is accurate when it is based on a known value of the inverse matrix at the previous iteration. Thus, the method is accurate in the presence of noise if a suitable setup period is carried out, as described at the beginning of this section. In this case, the algorithms that use the low complexity matrix inversion method (FExVAP and FExE-AP) behave identically to the algorithms based on the direct calculation of the inverse matrix (VAP and E-AP). Therefore, in order to study the robustness of the method, the proposed algorithms should skip the setup period and start from an approximate value of the signal inverse matrix. A meaningful initial value of the inverse matrix can be found in [28] and is given by $\mathbf{R}_N^{-1}(0) = \mathbf{I}_N(\sigma_x^2)^{-1}$, where $\mathbf{I}_N$ represents the $N \times N$ identity matrix and $\sigma_x^2$ is the variance of the reference signal.

Fig. 6 illustrates the learning curves (from 3000 independent realizations) of the algorithms with and without the suitable setup stage to calculate an initial inverse matrix, and compares them with the original AP algorithm. In this case, the number of coefficients (20) of the adaptive filter coincides with the impulse response length of the unknown system (P). Gaussian noise was added to the desired signal, d(n), to obtain experiments with SNRs of 20 dB, 30 dB and 40 dB. It can be observed in Fig. 6 that the FExVAP algorithm behaves as the original VAP when the same suitable setup stage is chosen, and both algorithms outperform the AP algorithm (N = 10) at steady state. On the other hand, the algorithm performance worsens when the setup stage is not used, although the FExVAP still outperforms the AP at steady state. Finally, the algorithm is robust against noise, since different SNR values affect all tested algorithms identically. In summary, as expected, a suitable setup stage is advisable to obtain optimum performance from the FExVAP algorithm.

5. Conclusions

An exact and computationally efficient method to calculate the inverse signal matrices involved in the AP and variable order AP algorithms has been described and validated by simulations. Thus, a fast exact variable order AP (FExVAP) algorithm has been developed. This algorithm outperforms the AP algorithm in terms of computational complexity, convergence speed and final residual error, and outperforms the VAP in number of multiplications, mainly when the algorithm is working at high projection orders, which is frequent in non-stationary environments. The developed recursive calculation of the inverse matrices can be used both when N does not change and when N increases or decreases by one unit between successive algorithm iterations.

Appendix A. Matrix inversion lemma

Let $\mathbf{C}$ and $\mathbf{H}$ be two positive definite $N \times N$ matrices that fulfill [22,27]

$$\mathbf{C} = \mathbf{H}^{-1} + \mathbf{U}\mathbf{W}^{-1}\mathbf{U}^T, \quad (A.1)$$

where $\mathbf{W}$ is an $M \times M$ positive definite matrix and $\mathbf{U}$ an $N \times M$ matrix; then, the inverse of $\mathbf{C}$ can be calculated as

$$\mathbf{C}^{-1} = \mathbf{H} - \mathbf{H}\mathbf{U}(\mathbf{W} + \mathbf{U}^T\mathbf{H}\mathbf{U})^{-1}\mathbf{U}^T\mathbf{H}. \quad (A.2)$$
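As a quick, purely illustrative numerical sanity check of (A.1)-(A.2):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 2
B = rng.standard_normal((N, 2 * N))
H = B @ B.T + np.eye(N)                          # positive definite H
W = np.diag(rng.uniform(0.5, 2.0, size=M))       # positive definite W
U = rng.standard_normal((N, M))

C = np.linalg.inv(H) + U @ np.linalg.inv(W) @ U.T                  # (A.1)
C_inv = H - H @ U @ np.linalg.inv(W + U.T @ H @ U) @ U.T @ H       # (A.2)
print(np.allclose(C_inv, np.linalg.inv(C)))      # expected: True
```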

Appendix B. Matrix inversion in block form

It is known that an inverse matrix can be calculated from its parts by

$$\begin{pmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{F}_{11}^{-1} & -\mathbf{F}_{11}^{-1}\mathbf{A}_{12}\mathbf{A}_{22}^{-1} \\ -\mathbf{A}_{22}^{-1}\mathbf{A}_{21}\mathbf{F}_{11}^{-1} & \mathbf{A}_{22}^{-1} + \mathbf{A}_{22}^{-1}\mathbf{A}_{21}\mathbf{F}_{11}^{-1}\mathbf{A}_{12}\mathbf{A}_{22}^{-1} \end{pmatrix} \quad (B.1)$$

with

$$\mathbf{F}_{11} = \mathbf{A}_{11} - \mathbf{A}_{12}\mathbf{A}_{22}^{-1}\mathbf{A}_{21}. \quad (B.2)$$
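Again as an illustrative check, (B.1)-(B.2) can be verified numerically for a random invertible matrix partitioned with a scalar lower-right block, as used in Sections 3.2 and 3.3:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
B = rng.standard_normal((N, 2 * N))
A = B @ B.T + np.eye(N)                              # invertible test matrix
A11, A12 = A[:-1, :-1], A[:-1, -1:]                  # block partition of A
A21, A22 = A[-1:, :-1], A[-1:, -1:]

F11 = A11 - A12 @ np.linalg.inv(A22) @ A21           # (B.2)
F11_inv, A22_inv = np.linalg.inv(F11), np.linalg.inv(A22)
top = np.hstack([F11_inv, -F11_inv @ A12 @ A22_inv])
bottom = np.hstack([-A22_inv @ A21 @ F11_inv,
                    A22_inv + A22_inv @ A21 @ F11_inv @ A12 @ A22_inv])
print(np.allclose(np.vstack([top, bottom]), np.linalg.inv(A)))   # expected: True
```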

References

[1] K. Ozeki, T. Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties, in: Proceedings of the Electronics and Communication in Japan, vol. J67-A, February 1984, pp. 126-132.
[2] A.H. Sayed, Fundamentals of Adaptive Filtering, Wiley, New York, 2003.
[3] B. Widrow, S.D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1985.
[4] A. Carini, G. Sicuranza, Transient and steady-state analysis of filtered-x affine projection algorithm, IEEE Transactions on Signal Processing 54 (2) (2006) 665-678.
[5] M. Bouchard, Multichannel affine and fast affine projection algorithms for active noise control and acoustic equalization systems, IEEE Transactions on Speech and Audio Processing 11 (1) (2003) 54-60.
[6] J. Benesty, P. Duhamel, Y. Grenier, A multichannel affine projection algorithm with applications to multichannel acoustic echo cancellation, IEEE Signal Processing Letters 3 (2) (1996) 35-37.
[7] G. Carayannis, D.G. Manolakis, N. Kalouptsidis, A fast sequential algorithm for least-squares filtering and prediction, IEEE Transactions on Acoustics, Speech and Signal Processing 31 (1983) 1394-1402.
[8] H.-C. Shin, A.H. Sayed, W.-J. Song, Variable step-size NLMS and affine projection algorithms, IEEE Signal Processing Letters 11 (2) (2004) 132-135.
[9] L. Liu, M. Fukumoto, S. Saiki, S. Zhang, A variable step-size proportionate affine projection algorithm for identification of sparse impulse response, EURASIP Journal on Advances in Signal Processing (2009) 1-10.
[10] L.R. Vega, H. Rey, J. Benesty, A robust variable step-size affine projection algorithm, Signal Processing 90 (9) (2010) 2806-2810.
[11] K. Mayyas, A variable step-size affine projection algorithm, Digital Signal Processing 20 (2) (2010) 502-510.
[12] S.-J. Kong, K.-Y. Hwang, W.-J. Song, An affine projection algorithm with dynamic selection of input vectors, IEEE Signal Processing Letters 14 (8) (2007) 529-532.
[13] S.-E. Kim, S.-J. Kong, W.-J. Song, An affine projection algorithm with evolving order, IEEE Signal Processing Letters 16 (11) (2009) 937-940.
[14] N.W. Kong, J.W. Shin, P.G. Park, A two-stage affine projection algorithm with mean square error matching step sizes, Signal Processing 91 (11) (2011) 2639-2646.
[15] A. Gonzalez, M. Ferrer, M. de Diego, G. Piñero, J.J. Lopez, Practical implementation of multichannel adaptive filters based on FTF and AP algorithms for active control, International Journal of Adaptive Control and Signal Processing 19 (2005) 89-105.
[16] S.L. Gay, S. Tavathia, The fast affine projection algorithm, in: Proceedings of the IEEE International Conference on Acoustics, Speech, Signal Processing (ICASSP), vol. 5, Detroit, MI, May 1995, pp. 3023-3026.
[17] M. Tanaka, Y. Kaneda, S. Makino, J. Kojima, Fast projection algorithm and its step size control, in: Proceedings of the IEEE International Conference on Acoustics, Speech, Signal Processing (ICASSP), vol. 2, May 1995, pp. 945-948.
[18] H. Ding, A stable fast affine projection adaptation algorithm suitable for low-cost processors, in: Proceedings of the IEEE International Conference on Acoustics, Speech, Signal Processing (ICASSP), vol. 1, August 2000, pp. 360-363.
[19] Y. Zakharov, Low complexity implementation of the affine projection algorithm, IEEE Signal Processing Letters 15 (2008) 557-560.
[20] F. Albu, H.K. Kwan, Fast block exact Gauss-Seidel pseudo affine projection algorithm, IEEE Electronics Letters 40 (22) (2004) 1451-1453.
[21] M.C. Tsakiris, P.A. Naylor, Fast exact affine projection algorithm using displacement structure theory, in: Proceedings of the DSP 2009, July 2009, pp. 69-74.
[22] S. Haykin, Adaptive Filter Theory, fourth ed., Prentice-Hall, Upper Saddle River, NJ, 2002.
[23] F. Albu, C. Paleologu, J. Benesty, A variable step size evolutionary affine projection algorithm, in: Proceedings of the IEEE International Conference on Acoustics, Speech, Signal Processing (ICASSP), Prague, Czech Republic, May 2011, pp. 429-432.
[24] C. Paleologu, J. Benesty, S. Ciochina, A variable step-size affine projection algorithm designed for acoustic echo cancellation, IEEE Transactions on Audio, Speech and Language Processing 16 (8) (2008) 1466-1478.
[25] S.G. Sankaran, A.A. Louis Beex, Convergence behavior of affine projection algorithms, IEEE Transactions on Signal Processing 45 (4) (2000) 1086-1096.
[26] M. Ferrer, M. de Diego, A. Gonzalez, G. Piñero, Efficient implementation of the affine projection algorithms for active noise control application, in: Proceedings of the 12th European Signal Processing Conference (EUSIPCO), September 2004.
[27] Y. Kaneda, M. Tanaka, J. Kojima, An adaptive algorithm with fast convergence for multi-input sound control, in: Proceedings of the Active 95, 6-8 July 1995, pp. 993-1004.
[28] A. Gonzalez, M. Ferrer, M. de Diego, L. Fuster, Efficient implementation of matrix recursions in the multichannel affine projection algorithm for multichannel sound, special session on sound processing for multimedia systems, in: MMSP 2004 IEEE International Workshop on Multimedia Signal Processing, September 2004, Siena, Italy.