Novel Methods For Volterra Filter Representation, Identification and Realization


- p. 1

Novel Methods For Volterra Filter Representation, Identification and Realization

Ender M. EKSIOGLU, M.Sc.

Advisor: Prof. Ahmet H. KAYRAN

Istanbul Technical University, Institute of Science and Technology
Electronics and Communications Engineering Department
February 2005

- p. 2

Main Headings

• Introduction
• Volterra Filters
• A Novel Volterra Filter Representation
• Identification of the Volterra Kernel Vectors Using Deterministic Input Signals
• Simulations for the System Identification Setting
• Some Numerical Properties of the Identification Algorithm
• Application to the Identification of Nonlinear Communication Channels
• A Novel Volterra Filter Realization Method and its Application to Nonlinear Adaptive Filtering
• Concluding Remarks


- p. 3

Introduction

In this dissertation we deal with
• discrete-time,
• finite-order,
• time-invariant
Volterra filters.

- p. 4

Introduction

• Firstly, we develop a new representation for finite-order Volterra filters. This representation introduces a novel partitioning of the Volterra kernels.

• Next, we formulate a novel exact identification method for Volterra filters.

• This identification method is based on the novel representation we develop and uses deterministic sequences consisting of impulses with distinct levels.


- p. 5

Introduction

• We know that the unit impulse response is insufficient to fully characterize a nonlinear system, unlike linear time-invariant systems.

• The identification method might be considered a successful extension of the impulse response of linear, time-invariant systems to the realm of nonlinear systems.

• The developed method indeed includes identification using the unit impulse response as a subcase when the system under consideration is a linear system.


- p. 6

Introduction

To the best of our knowledge, this method is the first full-scale generalization of the impulse response to finite-order Volterra-type nonlinear systems.

- p. 7

Introduction

• Our identification method is exact.

• Our method calculates each Volterra kernel individually.

• Our method calculates the Volterra kernels directly, instead of first calculating some intermediary representation.

• Our method does not introduce or identify any kernels which are redundant for the regular Volterra filter.

• Our method is parsimonious in the number of kernels identified and in the length of the input sequence utilized to identify them.


- p. 8

Introduction

• We show that the input sequence we develop for identification is persistently exciting for the Volterra filters under consideration.

• We further prove the equivalence of our identification algorithm to the least squares solution formulation.


- p. 9

Introduction

• We apply the novel identification method to the identification of the Volterra kernels of nonlinear communication channels modelled as third-order Volterra filters.

• We demonstrate with several simulations that the identification algorithm can produce better parameter estimates than some of the most recent algorithms in the literature.


- p. 10

Introduction

• A secondary contribution of this dissertation is in the area of orthogonal realizations for Volterra filters.

• We present a novel fully orthogonal structure for the realization of Volterra filters. This structure is based on a recently proposed 2D orthogonal lattice model.


- p. 11

Volterra Filters

• The use of linear system models is well established, with successful applications.

• However, there are still a large number of problems where one has to resort to nonlinear system models.

• Linear systems are fully described by their impulse response.

• There is no such unified framework for the representation of nonlinear systems. There are various categories for modelling nonlinear systems.

• In this dissertation we deal with nonlinear polynomial system models based on the Volterra series representation.


- p. 12

Volterra Filters

• Volterra filters based on the Volterra series have been an attractive nonlinear system class due to some desirable properties.

• Volterra filters bear similarities to the well-developed linear system theory.

• Volterra filters can approximate a large class of nonlinear systems with a finite number of coefficients.

• Many real-world processes lend themselves naturally to modelling by polynomial systems.


- p. 13

Volterra Filters - Overview

• In this section we will provide an overview of the Volterra series representation for nonlinear systems.

• The Taylor series expansion with memory is known as the Volterra series. The naming is due to Vito Volterra, the Italian mathematician who introduced this polynomial series.


- p. 14

Volterra Filters - Overview

• For a general continuous-time nonlinear system, the input-output relationship is represented by the following infinite continuous-time Volterra series integral.

$$
\begin{aligned}
y(t) = b_0 &+ \int_{-\infty}^{\infty} b_1(\tau_1)\,x(t-\tau_1)\,d\tau_1 \\
&+ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} b_2(\tau_1,\tau_2)\,x(t-\tau_1)\,x(t-\tau_2)\,d\tau_1\,d\tau_2 + \ldots \\
&+ \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} b_M(\tau_1,\tau_2,\ldots,\tau_M)\,x(t-\tau_1)\,x(t-\tau_2)\cdots x(t-\tau_M)\,d\tau_1\,d\tau_2\cdots d\tau_M \\
&+ \ldots
\end{aligned}
\tag{1}
$$


- p. 15

Volterra Filters - Overview

• The equivalent discrete-time Volterra series sum is given as follows.

$$
\begin{aligned}
y(n) = b_0 &+ \sum_{i_1=-\infty}^{\infty} b_1(i_1)\,x(n-i_1) \\
&+ \sum_{i_1=-\infty}^{\infty}\sum_{i_2=-\infty}^{\infty} b_2(i_1,i_2)\,x(n-i_1)\,x(n-i_2) + \ldots \\
&+ \sum_{i_1=-\infty}^{\infty}\cdots\sum_{i_M=-\infty}^{\infty} b_M(i_1,i_2,\ldots,i_M)\,x(n-i_1)\,x(n-i_2)\cdots x(n-i_M) \\
&+ \ldots
\end{aligned}
\tag{2}
$$


- p. 16

Truncated Volterra Filters

• The truncated or doubly finite Volterra series is obtained by confining the infinite summations to finite values.

• The truncated Volterra series is suitable for modelling a wide variety of nonlinearities encountered in real-life systems.

• In this thesis we will be concerned with discrete-time, causal, finite-memory, time-invariant nonlinear systems described by the discrete-time, truncated Volterra series expansion.


- p. 17

Truncated Volterra Filters

• The truncated Volterra series is given as

$$
\begin{aligned}
y(n) = &\sum_{i_1=0}^{N} b_1(i_1)\,x(n-i_1) \\
&+ \sum_{i_1=0}^{N}\sum_{i_2=i_1}^{N} b_2(i_1,i_2)\,x(n-i_1)\,x(n-i_2) + \ldots \\
&+ \sum_{i_1=0}^{N}\sum_{i_2=i_1}^{N}\cdots\sum_{i_M=i_{M-1}}^{N} b_M(i_1,i_2,\ldots,i_M)\,x(n-i_1)\,x(n-i_2)\cdots x(n-i_M)
\end{aligned}
\tag{3}
$$

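To make equation (3) concrete, the truncated series can be evaluated directly with nested loops over its triangular index ranges. The following Python sketch is our own minimal illustration for the second-order case (M = 2); the function name and the toy kernel values are assumptions for demonstration, not part of the thesis:

```python
import numpy as np

def volterra2_output(x, b1, b2):
    """Output of a second-order truncated Volterra filter (memory N),
    using the triangular form of equation (3): i2 runs from i1 to N."""
    N = len(b1) - 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for i1 in range(N + 1):
            if n - i1 < 0:          # causal filter: skip pre-start samples
                continue
            acc += b1[i1] * x[n - i1]                      # linear term
            for i2 in range(i1, N + 1):                    # triangular range
                if n - i2 < 0:
                    continue
                acc += b2[i1, i2] * x[n - i1] * x[n - i2]  # quadratic term
        y[n] = acc
    return y

# Toy kernels with memory N = 2; the values are arbitrary.
b1 = np.array([1.0, 0.5, 0.25])
b2 = np.zeros((3, 3))
b2[0, 1] = 0.1
x = np.array([1.0, -1.0, 2.0, 0.0])
y = volterra2_output(x, b1, b2)
```

Because the quadratic sum only visits i2 ≥ i1, each kernel value is counted exactly once, which is the point of the triangular representation.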

- p. 18

Truncated Volterra Filters

� This representation is called as the triangular Volterrarepresentation in the literature.

� In this thesis we will be calling this representation as theVolterra filter.

� The corresponding kernels will be called simply as theVolterra kernels to avoid confusion.

� We will use the triangular representation as the starting pointfor our studies to develop novel Volterra representations.


- p. 19

Truncated Volterra Filters

• The output y(n) in (3) can be rewritten as below.

$$y(n) = N[x(n)] = \sum_{k=0}^{M} y_k(n) \tag{4}$$

in which

$$y_k(n) = B_k[x(n)] \tag{5}$$

• $N[\cdot]$ represents the nonlinear system under consideration.
• $B_k[\cdot]$ represents a kth-order Volterra subsystem with an input-output relationship constituting k summations.


- p. 20

Truncated Volterra Filters

• The input-output relation for each of these subsystems is as indicated below.

$$
y_k(n) = B_k[x(n)] = \sum_{i_1=0}^{N}\sum_{i_2=i_1}^{N}\cdots\sum_{i_k=i_{k-1}}^{N} b_k(i_1,i_2,\ldots,i_k)\,x(n-i_1)\,x(n-i_2)\cdots x(n-i_k)
\tag{6}
$$

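The non-decreasing index tuples i1 ≤ i2 ≤ … ≤ ik in equation (6) are exactly the combinations with replacement of the lags 0…N, so a k-th order subsystem B_k can be sketched generically. This is our own illustrative implementation; the dict-based kernel `bk` and all names are assumptions:

```python
import numpy as np
from itertools import combinations_with_replacement

def yk_output(x, bk, k, N):
    """y_k(n) = B_k[x(n)] of equation (6): sum over non-decreasing index
    tuples (i1, ..., ik); bk maps a tuple to the kernel value b_k(i1,...,ik)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for idx in combinations_with_replacement(range(N + 1), k):
            if n - idx[-1] < 0:   # idx is sorted, so idx[-1] is the largest lag
                continue
            y[n] += bk[idx] * np.prod([x[n - i] for i in idx])
    return y

# Second-order subsystem (k = 2) with memory N = 1; toy kernel values.
bk = {(0, 0): 1.0, (0, 1): 0.5, (1, 1): 0.0}
x = np.array([2.0, 3.0])
y = yk_output(x, bk, k=2, N=1)
```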

- p. 21

Volterra Filter - Figure

Figure 1: The input-output relationship of the nonlinear Volterra filter $N[\cdot]$.

- p. 22

Volterra Filter - Sum of Subsystems

Figure 2: The nonlinear Volterra filter $N$ as a sum of nonlinear subsystems $B_k$.

- p. 23

Novel Representation

• In this section, the Mth-order discrete, causal, time-invariant Volterra system is reformulated, and a new representation for the Volterra system is given.

• This novel representation will enable us to devise an exact closed-form algorithm for identifying the Volterra kernels using deterministic multilevel sequences in the coming chapters.


- p. 24

Novel Representation

• The Mth-order nonlinear system, $N[\cdot]$, under consideration can be modelled by the triangular representation.

$$
y(n) = N[x(n)] = \sum_{k=1}^{M} y_k(n) = \sum_{k=1}^{M} B_k[x(n)]
= \sum_{k=1}^{M} \underbrace{\sum_{i_1=0}^{N}\sum_{i_2=i_1}^{N}\cdots\sum_{i_k=i_{k-1}}^{N} b_k(i_1,i_2,\ldots,i_k)\,x(n-i_1)\,x(n-i_2)\cdots x(n-i_k)}_{B_k[x(n)]}
\tag{7}
$$


- p. 25

Novel Representation

• We introduce a new representation for the Volterra system by rearranging the Volterra kernels.

• We propose that the output y(n) can be considered as the sum of the outputs of M different multivariate cross-term nonlinear subsystems, $H^{(\ell)}$, $\ell = 1, \ldots, M$.

$$y(n) = N[x(n)] = \sum_{\ell=1}^{M} y^{(\ell)}(n) \tag{8a}$$

$$y^{(\ell)}(n) = H^{(\ell)}[x(n)] \tag{8b}$$


- p. 26

Novel Representation

• The input-output relation for each of the subsystems $H^{(\ell)}$, $\ell = 1, 2, \ldots, M$ is as indicated below.

$$
\begin{aligned}
H^{(1)}[x(n)] &= \sum_{i=0}^{N} \mathbf{h}^{(1)T}(i)\,\mathbf{x}^{(1)}(n-i) \\
H^{(\ell)}[x(n)] &= \sum_{q_1=1}^{Q_1}\cdots\sum_{q_{\ell-1}=1}^{Q_{\ell-1}}\sum_{i=0}^{N-q_{\ell-1}} \mathbf{h}^{(\ell)T}(q_1,\ldots,q_{\ell-1};\,i)\,\mathbf{x}^{(\ell)}(q_1,\ldots,q_{\ell-1};\,n-i)
\quad\text{for } 2 \le \ell \le M
\end{aligned}
\tag{9}
$$

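For ℓ = 2 the cross-term subsystem of equation (9) reduces to a double sum over the lag q1 and the shift i. The sketch below is our own reading of equations (9) and (12)–(15) for that case; the stacking order of the exponent pairs inside x^(2), the kernel layout `h2`, and all names are assumptions made for illustration:

```python
import numpy as np

def x2_vector(x, q1, n, M):
    """x^(2)(q1; n): all products x^{p1}(n) x^{p2}(n - q1) with p1, p2 >= 1
    and p1 + p2 = k, stacked for k = 2, ..., M (an assumed ordering)."""
    entries = []
    for k in range(2, M + 1):
        for p1 in range(k - 1, 0, -1):   # exponent pairs (p1, p2 = k - p1)
            entries.append(x[n] ** p1 * x[n - q1] ** (k - p1))
    return np.array(entries)

def H2_output(x, h2, Q1, N, M):
    """H^(2)[x(n)] per equation (9): sum over q1 = 1..Q1 and i = 0..N-q1
    of the inner product h^(2)T(q1; i) x^(2)(q1; n - i)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for q1 in range(1, Q1 + 1):
            for i in range(N - q1 + 1):
                if n - i - q1 < 0:       # both input samples must exist
                    continue
                y[n] += h2[q1][i] @ x2_vector(x, q1, n - i, M)
    return y

# Smallest case M = 2: x^(2)(q1; n) has the single entry x(n) x(n - q1).
h2 = {1: {0: np.array([0.5])}}
x = np.array([2.0, 3.0])
y = H2_output(x, h2, Q1=1, N=1, M=2)
```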

- p. 27

Novel Representation - Figure

Figure 3: The novel decomposition for the nonlinear Volterra filter $N$ as a sum of cross-term subsystems $H^{(\ell)}$, $\ell = 1, \ldots, M$.


- p. 28

Novel Representation

• The symbol $H^{(\ell)}[\cdot]$ is called an ℓ-D cross-term Volterra subsystem, and $\mathbf{h}^{(\ell)}(q_1,\ldots,q_{\ell-1};\,i)$ is called an ℓ-D kernel vector.

• The 1-D kernel vectors $\mathbf{h}^{(1)}(i)$ and the corresponding input vectors $\mathbf{x}^{(1)}(n)$ can be given in terms of the triangular kernels and the input signal x(n), respectively.


- p. 29

Novel Representation

$$
\mathbf{h}^{(1)}(i) = \begin{bmatrix} h_1^{(1)}(i) \\ h_2^{(1)}(i) \\ \vdots \\ h_M^{(1)}(i) \end{bmatrix} = \begin{bmatrix} b_1(i) \\ b_2(i, i) \\ \vdots \\ b_M(i, \ldots, i) \end{bmatrix} \tag{10}
$$

$$
\mathbf{x}^{(1)}(n) = \begin{bmatrix} x(n) \\ x^2(n) \\ \vdots \\ x^M(n) \end{bmatrix} \tag{11}
$$

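Equations (10) and (11) make the 1-D branch of the decomposition simple to compute: H^(1) is a convolution-like sum between kernel vectors built from the diagonal values b_k(i,…,i) and vectors of input powers. A minimal Python sketch; the array layout, names, and toy values are our own assumptions:

```python
import numpy as np

def H1_output(x, h1, N, M):
    """H^(1)[x(n)] = sum_{i=0}^{N} h^(1)T(i) x^(1)(n - i), where row i of
    h1 holds h^(1)(i) of equation (10) (the diagonal kernel values) and
    x^(1)(n) = [x(n), x^2(n), ..., x^M(n)] is the power vector of (11)."""
    y = np.zeros(len(x))
    powers = np.arange(1, M + 1)
    for n in range(len(x)):
        for i in range(min(n, N) + 1):   # causal: only past samples
            x1 = x[n - i] ** powers      # x^(1)(n - i)
            y[n] += h1[i] @ x1           # inner product h^(1)T(i) x^(1)(n-i)
    return y

# N = 1, M = 2: h1[i] = [b1(i), b2(i, i)], toy values.
h1 = np.array([[1.0, 0.5],
               [0.25, 0.0]])
x = np.array([2.0, 3.0])
y = H1_output(x, h1, N=1, M=2)
```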
- p. 30

Novel Representation

• The ℓ-D input vector in (9) can be expressed in the following form:

$$
\mathbf{x}^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n) = \begin{bmatrix} \mathbf{x}_\ell^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n) \\ \mathbf{x}_{\ell+1}^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n) \\ \vdots \\ \mathbf{x}_M^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n) \end{bmatrix} \tag{12}
$$

in which

$$
x_\ell^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n) = x(n)\,x(n-q_1)\cdots x(n-q_1-\cdots-q_{\ell-1}) \tag{13}
$$

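The product in equation (13) is taken over input samples at the cumulative delays 0, q1, q1+q2, …, q1+⋯+q_{ℓ-1}. A small helper (our own illustration; the function name is an assumption) makes this explicit:

```python
def x_ell_product(x, qs, n):
    """x_ell^(ell)(q1, ..., q_{ell-1}; n) of equation (13): product of the
    input at cumulative delays 0, q1, q1+q2, ..., q1+...+q_{ell-1}."""
    delays = [0]
    for q in qs:
        delays.append(delays[-1] + q)   # accumulate the lag offsets
    prod = 1.0
    for d in delays:
        prod *= x[n - d]
    return prod

# ell = 3 with (q1, q2) = (1, 2): the product x(n) x(n-1) x(n-3)
x = [1.0, 2.0, 3.0, 4.0]
value = x_ell_product(x, (1, 2), n=3)
```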

- p. 31

Novel Representation

$$
\mathbf{x}_k^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n) \equiv \left[\, x_k^{(p_1,\cdots,p_\ell)}(q_1, \ldots, q_{\ell-1};\,n) \,\right]_{\sigma(p_1,\ldots,p_\ell)} \tag{14}
$$

• The subinput vector $\mathbf{x}_k^{(\ell)}(q_1, \ldots, q_{\ell-1};\,n)$ for $k = \ell, \ell+1, \ldots, M$ in (12) consists of all possible input products of degree k.

$$
x_k^{(p_1,\cdots,p_\ell)}(q_1, \ldots, q_{\ell-1};\,n) = x^{p_1}(n)\,x^{p_2}(n-q_1)\cdots x^{p_\ell}(n-q_1-\cdots-q_{\ell-1}) \tag{15}
$$

- p. 32

Novel Representation

Example 2.1: We consider ℓ = 3 and M = 5. x^(3)(q1, q2; n) is given as

$$
x^{(3)}(q_1,q_2;n) = \begin{bmatrix} x_3^{(3)}(q_1,q_2;n) \\ x_4^{(3)}(q_1,q_2;n) \\ x_5^{(3)}(q_1,q_2;n) \end{bmatrix} \quad (16)
$$

Here,

$$
x_3^{(3)}(q_1,q_2;n) = x(n)\,x(n-q_1)\,x(n-q_1-q_2) \quad (17)
$$

- p. 33

Novel Representation

For x_4^(3)(q1, q2; n), the $\binom{3}{2} = 3$ possible combinations can be written as σ(p1, p2, p3) = {(2, 1, 1), (1, 2, 1), (1, 1, 2)}. Hence, x_4^(3)(q1, q2; n) is written as

$$
x_4^{(3)}(q_1,q_2;n) = \begin{bmatrix} x^2(n)\,x(n-q_1)\,x(n-q_1-q_2) \\ x(n)\,x^2(n-q_1)\,x(n-q_1-q_2) \\ x(n)\,x(n-q_1)\,x^2(n-q_1-q_2) \end{bmatrix} \quad (18)
$$

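The index sets σ(p1, . . . , pℓ) used in (14), (18) and (19) are the compositions of the degree k into ℓ positive parts. A minimal sketch that reproduces the ordering shown on these slides (the helper names `compositions` and `subinput` are assumptions for illustration):

```python
import numpy as np

def compositions(k, ell):
    # All (p1, ..., p_ell) with p_i >= 1 and p1 + ... + p_ell = k,
    # first component descending, matching the order of (18)-(19).
    if ell == 1:
        return [(k,)]
    out = []
    for p in range(k - ell + 1, 0, -1):
        for rest in compositions(k - p, ell - 1):
            out.append((p,) + rest)
    return out

def subinput(x, n, qs, k):
    # Sketch of (14)-(15): stack the degree-k products
    # x^{p1}(n) x^{p2}(n - q1) ... over all compositions.
    ell = len(qs) + 1
    delays = [0] + list(np.cumsum(qs))  # 0, q1, q1+q2, ...
    return np.array([np.prod([x[n - d] ** p for p, d in zip(ps, delays)])
                     for ps in compositions(k, ell)])
```

`compositions(4, 3)` reproduces the three exponent tuples of (18), and `compositions(5, 3)` the six tuples of (19).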
- p. 34

Novel Representation

For x_5^(3)(q1, q2; n), all $\binom{4}{2} = 6$ possible combinations can be written as σ(p1, p2, p3) = {(3, 1, 1), (2, 2, 1), (2, 1, 2), (1, 3, 1), (1, 2, 2), (1, 1, 3)}:

$$
x_5^{(3)}(q_1,q_2;n) = \begin{bmatrix} x^3(n)\,x(n-q_1)\,x(n-q_1-q_2) \\ x^2(n)\,x^2(n-q_1)\,x(n-q_1-q_2) \\ x^2(n)\,x(n-q_1)\,x^2(n-q_1-q_2) \\ x(n)\,x^3(n-q_1)\,x(n-q_1-q_2) \\ x(n)\,x^2(n-q_1)\,x^2(n-q_1-q_2) \\ x(n)\,x(n-q_1)\,x^3(n-q_1-q_2) \end{bmatrix} \quad (19)
$$

- p. 35

Novel Representation

• The ℓ-D kernel vectors in (9) can be written in terms of subkernels as

$$
h^{(\ell)}(q_1,\ldots,q_{\ell-1};i) = \begin{bmatrix} h_\ell^{(\ell)}(q_1,\ldots,q_{\ell-1};i) \\ h_{\ell+1}^{(\ell)}(q_1,\ldots,q_{\ell-1};i) \\ \vdots \\ h_M^{(\ell)}(q_1,\ldots,q_{\ell-1};i) \end{bmatrix} \quad (20)
$$

in which

$$
h_\ell^{(\ell)}(q_1,\ldots,q_{\ell-1};i) = h_\ell^{(1,1,\ldots,1)}(q_1,\ldots,q_{\ell-1};i) \quad (21)
$$

- p. 36

Novel Representation

$$
h_k^{(\ell)}(q_1,\ldots,q_{\ell-1};i) \equiv \left[\, h_k^{(p_1,p_2,\ldots,p_\ell)}(q_1,\ldots,q_{\ell-1};i) \,\right]_{\sigma(p_1,p_2,\ldots,p_\ell)} \quad (22)
$$

• Here, the subkernel vector h_k^(ℓ)(q1, . . . , q_{ℓ−1}; i) corresponds to the subinput vector x_k^(ℓ)(q1, . . . , q_{ℓ−1}; n − i) defined in (14). h_k^(ℓ)(q1, . . . , q_{ℓ−1}; i) consists of all the Volterra kernels of degree k with ℓ cross-terms.

- p. 37

Novel Representation

Example 2.2: Continuing the previous example, let us consider ℓ = 3 and M = 5.

$$
h^{(3)}(q_1,q_2;i) = \begin{bmatrix} h_3^{(3)}(q_1,q_2;i) \\ h_4^{(3)}(q_1,q_2;i) \\ h_5^{(3)}(q_1,q_2;i) \end{bmatrix} \quad (23)
$$

Here,

$$
h_3^{(3)}(q_1,q_2;i) = h_3^{(1,1,1)}(q_1,q_2;i) \quad (24)
$$

- p. 38

Novel Representation

h_4^(3)(q1, q2; i) for ℓ = 3 can be written as

$$
h_4^{(3)}(q_1,q_2;i) = \begin{bmatrix} h_4^{(2,1,1)}(q_1,q_2;i) \\ h_4^{(1,2,1)}(q_1,q_2;i) \\ h_4^{(1,1,2)}(q_1,q_2;i) \end{bmatrix} \quad (25)
$$

- p. 39

Novel Representation

h_5^(3)(q1, q2; i) is written as

$$
h_5^{(3)}(q_1,q_2;i) = \begin{bmatrix} h_5^{(3,1,1)}(q_1,q_2;i) \\ h_5^{(2,2,1)}(q_1,q_2;i) \\ h_5^{(2,1,2)}(q_1,q_2;i) \\ h_5^{(1,3,1)}(q_1,q_2;i) \\ h_5^{(1,2,2)}(q_1,q_2;i) \\ h_5^{(1,1,3)}(q_1,q_2;i) \end{bmatrix} \quad (26)
$$

- p. 40

Novel Representation

• There exists an equivalent triangular Volterra kernel b_k(i1, i2, . . . , ik) as given in (7) for each component of the subkernel vector h_k^(ℓ)(q1, . . . , q_{ℓ−1}; i).

• The relationship between the triangular Volterra kernels and the cross-term Volterra kernels is given below:

$$
h_k^{(p_1,p_2,\ldots,p_\ell)}(q_1,\ldots,q_{\ell-1};i) = b_k(\underbrace{i,\ldots,i}_{p_1},\underbrace{i+q_1,\ldots,i+q_1}_{p_2},\ldots,\underbrace{i+q_{\ell-1},\ldots,i+q_{\ell-1}}_{p_\ell}) \quad (27)
$$

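The mapping (27) amounts to repeating each lag i + q_j a number of times given by the exponent p_j. A short sketch of this index bookkeeping (the helper name `triangular_args` is an assumption, and the lags i, i + q1, . . . , i + q_{ℓ−1} follow the notation of (27) and Example 2.3):

```python
def triangular_args(ps, qs, i):
    # Sketch of (27): build the argument tuple of the equivalent
    # triangular kernel b_k from the exponent tuple (p1, ..., p_ell),
    # the delays (q1, ..., q_{ell-1}) and the base index i.
    lags = [i] + [i + q for q in qs]
    args = []
    for p, lag in zip(ps, lags):
        args += [lag] * p  # lag i+q_j repeated p_j times
    return tuple(args)
```

For instance, the exponent tuple (2, 1, 1) with delays (q1, q2) reproduces the first row of (29), b_4(i, i, i + q1, i + q2).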
- p. 41

Novel Representation

Example 2.3: We continue with Example 2.2. We want to find the representations for the kernel vectors, but this time in terms of the Volterra kernels. The kernel vectors for ℓ = 3 and M = 5 can now be written as

$$
h_3^{(3)}(q_1,q_2;i) = b_3(i,\,i+q_1,\,i+q_2) \quad (28)
$$

$$
h_4^{(3)}(q_1,q_2;i) = \begin{bmatrix} h_4^{(2,1,1)}(q_1,q_2;i) \\ h_4^{(1,2,1)}(q_1,q_2;i) \\ h_4^{(1,1,2)}(q_1,q_2;i) \end{bmatrix}
= \begin{bmatrix} b_4(i,\,i,\,i+q_1,\,i+q_2) \\ b_4(i,\,i+q_1,\,i+q_1,\,i+q_2) \\ b_4(i,\,i+q_1,\,i+q_2,\,i+q_2) \end{bmatrix} \quad (29)
$$

- p. 42

Novel Representation

$$
h_5^{(3)}(q_1,q_2;i) = \begin{bmatrix} h_5^{(3,1,1)}(q_1,q_2;i) \\ h_5^{(2,2,1)}(q_1,q_2;i) \\ h_5^{(2,1,2)}(q_1,q_2;i) \\ h_5^{(1,3,1)}(q_1,q_2;i) \\ h_5^{(1,2,2)}(q_1,q_2;i) \\ h_5^{(1,1,3)}(q_1,q_2;i) \end{bmatrix}
= \begin{bmatrix} b_5(i,\,i,\,i,\,i+q_1,\,i+q_2) \\ b_5(i,\,i,\,i+q_1,\,i+q_1,\,i+q_2) \\ b_5(i,\,i,\,i+q_1,\,i+q_2,\,i+q_2) \\ b_5(i,\,i+q_1,\,i+q_1,\,i+q_1,\,i+q_2) \\ b_5(i,\,i+q_1,\,i+q_1,\,i+q_2,\,i+q_2) \\ b_5(i,\,i+q_1,\,i+q_2,\,i+q_2,\,i+q_2) \end{bmatrix} \quad (30)
$$

- p. 43

Novel Representation

Example 2.4: Let us give an example of the novel representation for a system with M = 3 and N = 2. The usual (triangular) Volterra representation is given in the following form:

$$
\begin{aligned}
y(n) = {}& \sum_{i_1=0}^{2} b_1(i_1)x(n-i_1) + \sum_{i_1=0}^{2}\sum_{i_2=i_1}^{2} b_2(i_1,i_2)x(n-i_1)x(n-i_2) \\
&+ \sum_{i_1=0}^{2}\sum_{i_2=i_1}^{2}\sum_{i_3=i_2}^{2} b_3(i_1,i_2,i_3)x(n-i_1)x(n-i_2)x(n-i_3)
\end{aligned} \quad (31)
$$

- p. 44

Novel Representation

$$
\begin{aligned}
y(n) = {}& b_1(0)x(n) + b_1(1)x(n-1) + b_1(2)x(n-2) + b_2(0,0)x^2(n) \\
&+ b_2(1,1)x^2(n-1) + b_2(2,2)x^2(n-2) + b_2(0,1)x(n)x(n-1) \\
&+ b_2(1,2)x(n-1)x(n-2) + b_2(0,2)x(n)x(n-2) + b_3(0,0,0)x^3(n) \\
&+ b_3(1,1,1)x^3(n-1) + b_3(2,2,2)x^3(n-2) + b_3(0,0,1)x^2(n)x(n-1) \\
&+ b_3(1,1,2)x^2(n-1)x(n-2) + b_3(0,0,2)x^2(n)x(n-2) \\
&+ b_3(0,1,1)x(n)x^2(n-1) + b_3(1,2,2)x(n-1)x^2(n-2) \\
&+ b_3(0,2,2)x(n)x^2(n-2) + b_3(0,1,2)x(n)x(n-1)x(n-2)
\end{aligned} \quad (32)
$$

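The triangular sum (31) can be evaluated directly by iterating over non-decreasing index tuples. A sketch (the function name `volterra_output` and the kernel container `b`, a dict of dicts mapping index tuples to kernel values, are illustrative assumptions):

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_output(b, x, n, M, N):
    # Sketch of (31): sum over orders k = 1..M and over non-decreasing
    # index tuples (i1 <= i2 <= ... <= ik) drawn from {0, ..., N}.
    y = 0.0
    for k in range(1, M + 1):
        for idx in combinations_with_replacement(range(N + 1), k):
            y += b[k][idx] * np.prod([x[n - i] for i in idx])
    return y
```

Iterating over `combinations_with_replacement` visits each triangular kernel b_k(i1, . . . , ik) exactly once, which is what distinguishes the triangular form (31) from the redundant symmetric form.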
- p. 45

Novel Representation

For this Volterra filter, using (8), the novel representation that we introduced is given as follows:

$$
\begin{aligned}
y(n) = {}& \sum_{i=0}^{2} h^{(1)T}(i)\,x^{(1)}(n-i) + \sum_{q_1=1}^{2}\sum_{i=0}^{2-q_1} h^{(2)T}(q_1;i)\,x^{(2)}(q_1;n-i) \\
&+ \sum_{q_1=1}^{1}\sum_{q_2=1}^{2-q_1}\sum_{i=0}^{2-q_1-q_2} h^{(3)T}(q_1,q_2;i)\,x^{(3)}(q_1,q_2;n-i) \\
= {}& \sum_{i=0}^{2} h^{(1)T}(i)\,x^{(1)}(n-i) \\
&+ h^{(2)T}(1;0)x^{(2)}(1;n) + h^{(2)T}(1;1)x^{(2)}(1;n-1) + h^{(2)T}(2;0)x^{(2)}(2;n) \\
&+ h^{(3)T}(1,1;0)\,x^{(3)}(1,1;n)
\end{aligned}
$$

- p. 46

Novel Representation

Here,

$$
h^{(1)}(i) = \begin{bmatrix} h_1^{(1)}(i) \\ h_2^{(1)}(i) \\ h_3^{(1)}(i) \end{bmatrix}
= \begin{bmatrix} b_1(i) \\ b_2(i,i) \\ b_3(i,i,i) \end{bmatrix}, \quad \text{for } i = 0, 1, 2
$$

$$
x^{(1)}(n-i) = \begin{bmatrix} x(n-i) \\ x^2(n-i) \\ x^3(n-i) \end{bmatrix}, \quad \text{for } i = 0, 1, 2
$$

- p. 47

Novel Representation

$$
h^{(2)}(1;0) = \begin{bmatrix} h_2^{(2)}(1;0) \\ h_3^{(2)}(1;0) \end{bmatrix}
= \begin{bmatrix} h_2^{(1,1)}(1;0) \\ h_3^{(2,1)}(1;0) \\ h_3^{(1,2)}(1;0) \end{bmatrix}
= \begin{bmatrix} b_2(0,1) \\ b_3(0,0,1) \\ b_3(0,1,1) \end{bmatrix}
$$

$$
x^{(2)}(1;n) = \begin{bmatrix} x_2^{(2)}(1;n) \\ x_3^{(2)}(1;n) \end{bmatrix}
= \begin{bmatrix} x_2^{(2)}(1;n) \\ x_3^{(2,1)}(1;n) \\ x_3^{(1,2)}(1;n) \end{bmatrix}
= \begin{bmatrix} x(n)x(n-1) \\ x^2(n)x(n-1) \\ x(n)x^2(n-1) \end{bmatrix}
$$

- p. 48

Novel Representation

$$
h^{(2)}(1;1) = \begin{bmatrix} h_2^{(2)}(1;1) \\ h_3^{(2)}(1;1) \end{bmatrix}
= \begin{bmatrix} h_2^{(1,1)}(1;1) \\ h_3^{(2,1)}(1;1) \\ h_3^{(1,2)}(1;1) \end{bmatrix}
= \begin{bmatrix} b_2(1,2) \\ b_3(1,1,2) \\ b_3(1,2,2) \end{bmatrix}
$$

$$
x^{(2)}(1;n-1) = \begin{bmatrix} x_2^{(2)}(1;n-1) \\ x_3^{(2)}(1;n-1) \end{bmatrix}
= \begin{bmatrix} x_2^{(2)}(1;n-1) \\ x_3^{(2,1)}(1;n-1) \\ x_3^{(1,2)}(1;n-1) \end{bmatrix}
= \begin{bmatrix} x(n-1)x(n-2) \\ x^2(n-1)x(n-2) \\ x(n-1)x^2(n-2) \end{bmatrix}
$$

- p. 49

Novel Representation

$$
h^{(2)}(2;0) = \begin{bmatrix} h_2^{(2)}(2;0) \\ h_3^{(2)}(2;0) \end{bmatrix}
= \begin{bmatrix} h_2^{(1,1)}(2;0) \\ h_3^{(2,1)}(2;0) \\ h_3^{(1,2)}(2;0) \end{bmatrix}
= \begin{bmatrix} b_2(0,2) \\ b_3(0,0,2) \\ b_3(0,2,2) \end{bmatrix}
$$

$$
x^{(2)}(2;n) = \begin{bmatrix} x_2^{(2)}(2;n) \\ x_3^{(2)}(2;n) \end{bmatrix}
= \begin{bmatrix} x_2^{(2)}(2;n) \\ x_3^{(2,1)}(2;n) \\ x_3^{(1,2)}(2;n) \end{bmatrix}
= \begin{bmatrix} x(n)x(n-2) \\ x^2(n)x(n-2) \\ x(n)x^2(n-2) \end{bmatrix}
$$

- p. 50

Novel Representation

$$
h^{(3)}(1,1;0) = \left[\, h_3^{(1,1,1)}(1,1;0) \,\right] = \left[\, b_3(0,1,2) \,\right]
$$

$$
x^{(3)}(1,1;n) = \left[\, x_3^{(3)}(1,1;n) \,\right] = \left[\, x(n)x(n-1)x(n-2) \,\right]
$$

- p. 51

Novel Representation

• The novel cross-product kernel representation presented in this section does not increase the number of kernels in the Volterra filter.

• We have grouped the Volterra kernels in a novel manner, introducing the concepts of delay-wise dimensionality and cross-term subsystem, rather than grouping the kernels by their multiplicative order.

• This novel grouping enables us to devise an exact closed-form algorithm for identifying the Volterra kernels using deterministic multilevel sequences.

- p. 52

Novel Identification Algorithm

• Since Wiener introduced the use of the Volterra series for nonlinear modelling in engineering problems, researchers have developed several methods for the estimation of the Volterra kernels.

• The most common class of Volterra system identification methods comprises cross-correlation methods based on random inputs.

- p. 53

Novel Identification Algorithm

• We focus on deterministic excitation sequences for the identification of nonlinear systems modelled using the truncated Volterra series representation.

• We have proposed a novel partitioning of the Volterra kernels. This representation results in simple closed-form solutions for the kernels when deterministic multilevel input sequences are used.

- p. 54

Novel Identification Algorithm - 1D

• In the novel representation we decomposed the output of the overall nonlinear system in terms of the outputs of the newly defined cross-term subsystems H^(ℓ)[·], ℓ = 1, 2, . . . , M.

• We call the subsystem H^(ℓ)[·] the ℓ-D subsystem.

• Multilevel single impulses, x^(1)(m1; n) = a_{m1} δ(n), for m1 = 1, 2, . . . , $\binom{M}{1}$, can be used to obtain the 1-D kernel vectors in H^(1)[·].

• Using the novel cross-term representation, the higher-dimensional outputs are zero for these multilevel single impulses.

- p. 55

Novel Identification Algorithm - 1D

$$
\begin{aligned}
N[a_{m_1}\delta(n)] &= H^{(1)}[a_{m_1}\delta(n)] \\
H^{(\ell)}[a_{m_1}\delta(n)] &= 0, \quad \text{for } \ell = 2, \ldots, M
\end{aligned} \quad (33)
$$

This relation in (33) is shown pictorially in Fig. 4.

- p. 56

Novel Identification Algorithm - 1D

Figure 4: Pictorial description for (33). The output of the nonlinear system N for x(n) = a_{m1} δ(n) is equal to the output of the subsystem H^(1); the subsystems H^(2), . . . , H^(M) output zero.

- p. 57

Novel Identification Algorithm - 1D

• Hence, the output of the Mth-order nonlinear system, when the input is x(n) = x^(1)(m1; n), is given by

$$
\begin{aligned}
y(n) &= N\!\left[x^{(1)}(m_1;n)\right] = y(m_1;n) \\
&= H^{(1)}\!\left[x^{(1)}(m_1;n)\right] = y^{(1)}(m_1;n) \\
&= \sum_{i=0}^{N} h^{(1)T}(i)\,u^{(1)}(m_1;n-i)
\end{aligned} \quad (34)
$$

- p. 58

Novel Identification Algorithm - 1D

$$
u^{(1)}(m_1;n) \equiv \begin{bmatrix} x^{(1)}(m_1;n) \\ x^{(1)2}(m_1;n) \\ \vdots \\ x^{(1)M}(m_1;n) \end{bmatrix}
= \begin{bmatrix} a_{m_1} \\ a_{m_1}^2 \\ \vdots \\ a_{m_1}^M \end{bmatrix} \delta(n) \quad (35)
$$

- p. 59

Novel Identification Algorithm - 1D

• Here, m1 = 1, 2, . . . , $\binom{M}{1}$ denotes the ensemble index of the input sequence. Now we can write all $\binom{M}{1}$ outputs in the ensemble matrix form as follows:

$$
y_e^{(1)}(n) = N\!\left[x_e^{(1)}(n)\right] = H^{(1)}\!\left[x_e^{(1)}(n)\right] = \sum_{i=0}^{N} U_e^{(1)}(n-i)\,h^{(1)}(i) \quad (36)
$$

where x_e^(1)(n), y_e^(1)(n) and U_e^(1)(n) denote the ensemble input vector, the ensemble output vector and the input matrix, respectively.

- p. 60

Novel Identification Algorithm - 1D

$$
x_e^{(1)}(n) \equiv \begin{bmatrix} x^{(1)}(1;n) \\ x^{(1)}(2;n) \\ \vdots \\ x^{(1)}(M;n) \end{bmatrix}
= \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_M \end{bmatrix} \delta(n) \quad (37)
$$

$$
y_e^{(1)}(n) \equiv \begin{bmatrix} N[x^{(1)}(1;n)] \\ N[x^{(1)}(2;n)] \\ \vdots \\ N[x^{(1)}(M;n)] \end{bmatrix}
= \begin{bmatrix} y^{(1)}(1;n) \\ y^{(1)}(2;n) \\ \vdots \\ y^{(1)}(M;n) \end{bmatrix} \quad (38)
$$

- p. 61

Novel Identification Algorithm - 1D

$$
U_e^{(1)}(n) \equiv \begin{bmatrix} u^{(1)T}(1;n) \\ u^{(1)T}(2;n) \\ \vdots \\ u^{(1)T}(M;n) \end{bmatrix} \quad (39)
$$

- p. 62

Novel Identification Algorithm - 1D

• U_e^(1)(n − i) in (36) can be replaced with U_e^(1)(n − i) = U_e^(1) δ(n − i), and the matrix U_e^(1) is written as

$$
U_e^{(1)} = \begin{bmatrix} a_1 & a_1^2 & \cdots & a_1^M \\ a_2 & a_2^2 & \cdots & a_2^M \\ \vdots & \vdots & \ddots & \vdots \\ a_M & a_M^2 & \cdots & a_M^M \end{bmatrix} \quad (40)
$$

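The matrix (40) is a Vandermonde-like matrix in the amplitude levels, so it can be built in one line. A sketch (the helper name `U1` is an assumption):

```python
import numpy as np

def U1(a):
    # Sketch of (40): row m is (a_m, a_m^2, ..., a_m^M) for the
    # amplitude levels a = (a_1, ..., a_M).
    a = np.asarray(a, dtype=float)
    M = len(a)
    return a[:, None] ** np.arange(1, M + 1)

# Distinct, nonzero levels keep the Vandermonde-like matrix invertible.
assert np.linalg.matrix_rank(U1([0.5, -0.5, 1.0])) == 3
```

Since U_e^(1) = diag(a) · V, with V the classical Vandermonde matrix of the distinct nodes a_m, nonzero distinct levels suffice for invertibility.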
- p. 63

Novel Identification Algorithm -1D

• Hence, we get

$$
y_e^{(1)}(n) = \sum_{i=0}^{N} U_e^{(1)} h^{(1)}(i)\,\delta(n-i) = U_e^{(1)} h^{(1)}(n) \quad (41)
$$

• Provided the inverse of the M × M matrix U_e^(1) exists, the 1-D kernel vectors can be obtained as

$$
h^{(1)}(n) = \left[U_e^{(1)}\right]^{-1} y_e^{(1)}(n), \quad \text{for } n = 0, 1, \ldots, N \quad (42)
$$

• Fig. 5 depicts the identification method for 1-D kernels as outlined in this section and finalized in (42).

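The whole 1-D step (36)-(42) can be sketched as a small routine: excite a black-box system with the M single impulses a_m δ(n), stack the responses, and solve the linear system. The function name `identify_1d` and the callable `system` (mapping an input array to an output array) are illustrative assumptions.

```python
import numpy as np

def identify_1d(system, a, N):
    # Sketch of (36)-(42): build the Vandermonde-like matrix of (40),
    # record the M single-impulse responses, and solve U h(n) = y_e(n).
    M = len(a)
    U = np.asarray(a, float)[:, None] ** np.arange(1, M + 1)  # (40)
    Y = np.empty((M, N + 1))
    for m, amp in enumerate(a):
        x = np.zeros(N + 1)
        x[0] = amp                  # x(n) = a_m * delta(n)
        Y[m] = system(x)            # row m: y^(1)(m; n), n = 0..N
    return np.linalg.solve(U, Y)    # column n is h^(1)(n), as in (42)
```

Using `np.linalg.solve` instead of forming the explicit inverse of (42) is numerically preferable when the levels make U poorly conditioned.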
- p. 64

Novel Identification Algorithm - 1D

Figure 5: Method used for identification of 1-D Volterra kernels, h^(1)(n): the multilevel impulse ensemble x_e^(1)(n) drives the nonlinear system N, and the ensemble output y_e^(1)(n) is multiplied by [U_e^(1)]^{-1}.

• Note that linear FIR filter identification via the impulse response is covered by this method as the special case M = 1.

- p. 65

Novel Identification Algorithm - 1D

• In order to guarantee that the Vandermonde-like input matrix U_e^(1) in (40) is nonsingular, the levels of the multilevel ensemble inputs must be chosen to be distinct and nonzero, i.e., a_i ≠ 0 and a_i ≠ a_j, ∀ i ≠ j.

- p. 66

Novel Identification Algorithm - 2D

• Now we are interested in computing the 2-D kernel vectors h^(2)(q1; i) for q1 = 1, . . . , N and i = 0, 1, . . . , N − q1.

• We use 2-D ensemble inputs, each consisting of two impulses with distinct amplitudes.

• The following sequence of two impulses with distinct amplitudes excites only the 2-D kernel vector h^(2)(q1; n − q1) and the 1-D kernel vectors h^(1)(n) and h^(1)(n − q1).

- p. 67

Novel Identification Algorithm - 2D

$$
\begin{aligned}
x^{(2)}\big((m_1,m_2),q_1;n\big) &= x^{(1)}(m_1;n) + x^{(1)}(m_2;n-q_1) \\
&= a_{m_1}\delta(n) + a_{m_2}\delta(n-q_1) \\
&\quad \text{for } m_1 = 1,\ldots,M-1;\; m_2 = m_1+1,\ldots,M
\end{aligned} \quad (43)
$$

• It is possible to show that the 2-D input signal in (43) does not excite the Volterra kernels having more than two cross-terms, i.e.,

$$
H^{(\ell)}\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right] = 0 \quad \text{for } \ell \geq 3. \quad (44)
$$

• (44) is depicted in Fig. 6.

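Constructing the two-impulse excitation of (43) is straightforward; a sketch (the helper name `x2` is an assumption, with 1-based ensemble indices m1, m2 as on the slides):

```python
import numpy as np

def x2(a, m1, m2, q1, length):
    # Sketch of (43): a_{m1} * delta(n) + a_{m2} * delta(n - q1).
    x = np.zeros(length)
    x[0] += a[m1 - 1]
    x[q1] += a[m2 - 1]
    return x
```

For example, `x2([a1, a2, a3], 1, 3, 2, length)` places a1 at n = 0 and a3 at n = 2.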
- p. 68

Novel Identification Algorithm - 2D

Figure 6: Pictorial description for (44). The output of the nonlinear system N, for x(n) = a_{m1} δ(n) + a_{m2} δ(n − q1), is equal to the sum of the outputs v^(2,1)((m1, m2), q1; n) and v^(2,2)((m1, m2), q1; n) of subsystems H^(1) and H^(2); the subsystems H^(3), . . . , H^(M) output zero.

- p. 69

Novel Identification Algorithm - 2D

• Hence, the output for the input in (43) can be written as

$$
\begin{aligned}
N\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right] &= y^{(2)}\big((m_1,m_2),q_1;n\big) \\
&= H^{(1)}\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right] + H^{(2)}\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right]
\end{aligned} \quad (45)
$$

• where

$$
H^{(2)}\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right] = v^{(2,2)}\big((m_1,m_2),q_1;n\big) \quad (46)
$$

- p. 70

Novel Identification Algorithm - 2D

$$
\begin{aligned}
H^{(1)}\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right] &= v^{(2,1)}\big((m_1,m_2),q_1;n\big) \\
&= v^{(1,1)}(m_1;n) + v^{(1,1)}(m_2;n-q_1)
\end{aligned} \quad (47)
$$

with v^(1,1)(m_i; n) = H^(1)[x^(1)(m_i; n)].

- p. 71

Novel Identification Algorithm - 2D

Here,

$$
v^{(i,j)}\big((m_1,\ldots,m_i),q_1,\ldots,q_{i-1};n\big) = H^{(j)}\!\left[x^{(i)}\big((m_1,\ldots,m_i),q_1,\ldots,q_{i-1};n\big)\right] \quad (48)
$$

The output of the 2-D subsystem can be obtained from (45), (46) and (47) as

$$
v^{(2,2)}\big((m_1,m_2),q_1;n\big) = N\!\left[x^{(2)}\big((m_1,m_2),q_1;n\big)\right] - \left[v^{(1,1)}(m_1;n) + v^{(1,1)}(m_2;n-q_1)\right] \quad (49)
$$

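The subtraction (49) can be sketched directly: because a single impulse excites only H^(1) (see (33)), the measured single-impulse responses of the full system serve as the v^(1,1) terms. The helper names `v22` and `system` (a callable mapping an input array to an output array) are illustrative assumptions.

```python
import numpy as np

def v22(system, a, m1, m2, q1, length):
    # Sketch of (49): full response to the two-impulse input of (43)
    # minus the two single-impulse responses.
    def impulse(amp, delay):
        x = np.zeros(length)
        x[delay] = amp
        return x
    y = system(impulse(a[m1 - 1], 0) + impulse(a[m2 - 1], q1))
    # By (33), N applied to a single impulse equals v^(1,1).
    v11_m1 = system(impulse(a[m1 - 1], 0))   # v^(1,1)(m1; n)
    v11_m2 = system(impulse(a[m2 - 1], q1))  # v^(1,1)(m2; n - q1)
    return y - v11_m1 - v11_m2
```

For a pure cross-term system such as y(n) = x(n)x(n−1), the single-impulse responses vanish and `v22` returns the full cross-term response.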
- p. 72

Novel Identification Algorithm - 2D

• The output of the 2-D subsystem is obtained by subtracting the appropriate previously computed 1-D outputs from the overall nonlinear system output.

• We observe the outputs for $\binom{M}{2}$ distinct 2-D ensemble inputs, which are given in matrix form as follows:

$$
x_e^{(2)}(q_1;n) = T_{2,1}^{(M)} x_e^{(1)}(n) + T_{2,2}^{(M)} x_e^{(1)}(n-q_1) \quad (50)
$$

where

$$
x_e^{(1)}(n) = \begin{bmatrix} a_1 & a_2 & \cdots & a_M \end{bmatrix}^T \delta(n)
$$

- p. 73

Novel Identification Algorithm - 2D

• The constant matrices T_{2,1}^{(M)} and T_{2,2}^{(M)}, with $\binom{M}{2}$ rows and $\binom{M}{1}$ columns, are used to determine the necessary $\binom{M}{2}$ combinations of the $\binom{M}{1}$ = M ensemble inputs when taken two at a time.

• These matrices are called the ensemble input formation matrices.

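The formation matrices of (50) are 0/1 selection matrices: row r corresponds to the r-th 2-combination (m1, m2) of the M ensemble indices, with T_{2,1} selecting a_{m1} and T_{2,2} selecting a_{m2}. A sketch (the helper name `formation_matrices_2` is an assumption):

```python
import numpy as np
from itertools import combinations

def formation_matrices_2(M):
    # Sketch of T^(M)_{2,1} and T^(M)_{2,2} in (50): C(M,2) x M
    # selection matrices, one row per pair (m1, m2), m1 < m2.
    pairs = list(combinations(range(M), 2))
    T1 = np.zeros((len(pairs), M))
    T2 = np.zeros((len(pairs), M))
    for r, (m1, m2) in enumerate(pairs):
        T1[r, m1] = 1.0  # picks a_{m1}
        T2[r, m2] = 1.0  # picks a_{m2}
    return T1, T2
```

Applied to the amplitude vector a, T1 returns the first amplitude of each pair and T2 the second, so their impulse-weighted sum reproduces the ensemble of two-impulse inputs in (50).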
- p. 74

Novel Identification Algorithm - 2D

• Similar to the single-input single-output case in (45), using the 2-D ensemble input vector in (50), we can determine the corresponding output vectors of the 1-D and 2-D subsystems, v_e^(2,1)(q1; n) = H^(1)[x_e^(2)(q1; n)] and v_e^(2,2)(q1; n) = H^(2)[x_e^(2)(q1; n)], respectively.

• The notation v_e^(i,j)(q1, . . . , q_{i−1}; n) denotes the output of the j-D subsystem for an i-D input ensemble:

$$
v_e^{(i,j)}(q_1,\ldots,q_{i-1};n) = H^{(j)}\!\left[x_e^{(i)}(q_1,\ldots,q_{i-1};n)\right] \quad (51)
$$

- p. 75

Novel Identification Algorithm - 2D

The response of the 1-D subsystem to the 2-D input ensemble can be decomposed in terms of the 1-D responses as

$$
\begin{aligned}
v_e^{(2,1)}(q_1;n) &= H^{(1)}\!\left[T_{2,1}^{(M)} x_e^{(1)}(n)\right] + H^{(1)}\!\left[T_{2,2}^{(M)} x_e^{(1)}(n-q_1)\right] \\
&= T_{2,1}^{(M)} v_e^{(1,1)}(n) + T_{2,2}^{(M)} v_e^{(1,1)}(n-q_1)
\end{aligned} \quad (52)
$$

- p. 76

Novel Identification Algorithm - 2D

The response of the 2-D subsystem can be obtained by subtracting the response of the 1-D subsystem from the nonlinear system output y_e^(2)(q1; n) = N[x_e^(2)(q1; n)]:

$$
\begin{aligned}
v_e^{(2,2)}(q_1;n) &= y_e^{(2)}(q_1;n) - v_e^{(2,1)}(q_1;n) \\
&= y_e^{(2)}(q_1;n) - \left[T_{2,1}^{(M)} v_e^{(1,1)}(n) + T_{2,2}^{(M)} v_e^{(1,1)}(n-q_1)\right]
\end{aligned} \quad (53)
$$

- p. 77

Novel Identification Algorithm - 2D

It is possible to write the 2-D subsystem equation for the ensemble input case:

$$
v_e^{(2,2)}(q_1;n) = \sum_{i=0}^{N-q_1} U_e^{(2)}(q_1;n-i)\,h^{(2)}(q_1;i) = U_e^{(2)} h^{(2)}(q_1;n-q_1) \quad (54)
$$

Similar to the 1-D case, U_e^(2)(q1; n − i) is replaced with U_e^(2) δ(n − q1 − i). U_e^(2) has the dimension $\binom{M}{2} \times \binom{M}{2}$, and it is written in terms of the amplitude levels a_1, a_2, . . . , a_M.

- p. 78

Novel Identification Algorithm - 2D

As an example, for M = 3 and M = 4, the U_e^(2) matrices are given respectively as follows:

$$
U_e^{(2)} = \begin{bmatrix} a_1 a_2 & a_1 a_2^2 & a_1^2 a_2 \\ a_1 a_3 & a_1 a_3^2 & a_1^2 a_3 \\ a_2 a_3 & a_2 a_3^2 & a_2^2 a_3 \end{bmatrix}
$$

$$
U_e^{(2)} = \begin{bmatrix}
a_1 a_2 & a_1 a_2^2 & a_1^2 a_2 & a_1 a_2^3 & a_1^2 a_2^2 & a_1^3 a_2 \\
a_1 a_3 & a_1 a_3^2 & a_1^2 a_3 & a_1 a_3^3 & a_1^2 a_3^2 & a_1^3 a_3 \\
a_1 a_4 & a_1 a_4^2 & a_1^2 a_4 & a_1 a_4^3 & a_1^2 a_4^2 & a_1^3 a_4 \\
a_2 a_3 & a_2 a_3^2 & a_2^2 a_3 & a_2 a_3^3 & a_2^2 a_3^2 & a_2^3 a_3 \\
a_2 a_4 & a_2 a_4^2 & a_2^2 a_4 & a_2 a_4^3 & a_2^2 a_4^2 & a_2^3 a_4 \\
a_3 a_4 & a_3 a_4^2 & a_3^2 a_4 & a_3 a_4^3 & a_3^2 a_4^2 & a_3^3 a_4
\end{bmatrix}
$$

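The structure of U_e^(2) above is regular: rows run over amplitude pairs (a_i, a_j), i < j, and columns over exponent pairs (p1, p2) with p1, p2 ≥ 1 and p1 + p2 ≤ M, grouped by total degree as on the slide. A sketch (the helper name `U2` and this column ordering are read off the M = 3 and M = 4 examples):

```python
import numpy as np
from itertools import combinations

def U2(a):
    # Sketch of the C(M,2) x C(M,2) matrix U^(2)_e of (54): entry
    # (row, col) is a_i^{p1} * a_j^{p2} for the row's pair (i, j) and
    # the column's exponent pair (p1, p2).
    M = len(a)
    # exponent pairs grouped by degree k = 2..M, p1 ascending within k
    exps = [(p, k - p) for k in range(2, M + 1) for p in range(1, k)]
    rows = [[a[i] ** p1 * a[j] ** p2 for p1, p2 in exps]
            for i, j in combinations(range(M), 2)]
    return np.array(rows)
```

There are M(M−1)/2 exponent pairs and M(M−1)/2 amplitude pairs, which is why the matrix comes out square.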
- p. 79

Novel Identification Algorithm - 2D

The 2-D Volterra kernel vectors are obtained as

$$
h^{(2)}(q_1;n-q_1) = \left[U_e^{(2)}\right]^{-1} v_e^{(2,2)}(q_1;n) \quad (55)
$$

for q1 = 1, . . . , N and n = q1, q1 + 1, . . . , N, provided the inverse exists. Fig. 7 depicts the identification method for 2-D kernels as outlined in this section and in (55).

- p. 80

Novel Identification Algorithm - 2D

Figure 7: Method used for identification of 2-D Volterra kernels, h^(2)(q1; n): the 1-D and 2-D input ensembles both drive N; the combination T_{2,1}^{(M)} v_e^(1,1)(n) + T_{2,2}^{(M)} v_e^(1,1)(n − q1) is subtracted from y_e^(2)(q1; n), and the result is multiplied by [U_e^(2)]^{-1} to yield h^(2)(q1; n − q1).

- p. 81

Novel Identification Algorithm - 3D

• The 3-D kernel vectors h^(3)(q1, q2; i) are determined by using the 3-D ensemble responses along with the responses of the 1-D and 2-D subsystems.

• The following 3-D ensemble input with three distinct impulses excites only the 1-D, 2-D and 3-D subsystems:

$$
x_e^{(3)}(q_1,q_2;n) = T_{3,1}^{(M)} x_e^{(1)}(n) + T_{3,2}^{(M)} x_e^{(1)}(n-q_2) + T_{3,3}^{(M)} x_e^{(1)}(n-q_1-q_2) \quad (56)
$$

- p. 82

Novel Identification Algorithm - 3D

The input formation matrices T_{3,1}^{(M)}, T_{3,2}^{(M)} and T_{3,3}^{(M)}, with $\binom{M}{3}$ rows and $\binom{M}{1}$ columns, are used to determine the necessary $\binom{M}{3}$ combinations of the multilevel impulse functions when taking triplets at a time.

- p. 83

Novel Identification Algorithm - 3D

The output of the nonlinear system can be written as the sum of the outputs of the excited subsystems:

$$
y_e^{(3)}(q_1,q_2;n) = N\!\left[x_e^{(3)}(q_1,q_2;n)\right] = \sum_{i=1}^{3} H^{(i)}\!\left[x_e^{(3)}(q_1,q_2;n)\right] = \sum_{i=1}^{3} v_e^{(3,i)}(q_1,q_2;n) \quad (57)
$$

The input-output relationship in (57) is depicted pictorially in Fig. 8.

- p. 84

Novel Identification Algorithm - 3D

Figure 8: Pictorial description for (57). The output of the nonlinear system N for the 3-D input ensemble x_e^(3)(q1, q2; n) is equal to the sum of the outputs v_e^(3,1)(q1, q2; n), v_e^(3,2)(q1, q2; n) and v_e^(3,3)(q1, q2; n) of subsystems H^(1), H^(2) and H^(3); the subsystems H^(4), . . . , H^(M) output zero.

- p. 85

Novel Identification Algorithm - 3D

■ The output of the 1-D subsystem for the 3-D ensemble input can be written as a sum of the 1-D ensemble outputs:

v_e^{(3,1)}(q_1, q_2; n) = \sum_{j=1}^{\binom{3}{1}} S^{(M)}_{31,j} v_e^{(1,1)}(n - n_j^{(3,1)})    (58)

■ The matrices S^{(M)}_{31,j} for j = 1, 2, 3 are used to pick up the appropriate 1-D ensemble output values.

■ We call these matrices the output pick-up matrices.


- p. 86

Novel Identification Algorithm - 3D

It is also possible to determine the responses of the 2-D subsystem for the 3-D ensemble inputs,

v_e^{(3,2)}(q_1, q_2; n) = \sum_{j=1}^{\binom{3}{2}} S^{(M)}_{32,j} v_e^{(2,2)}(q_j^{(3,2)}; n - n_j^{(3,2)})    (59)

where the output pick-up matrices S^{(M)}_{32,j} for j = 1, 2, 3, which have \binom{M}{3} rows and \binom{M}{2} columns, are used to determine the appropriate 2-D ensemble output values for x_e^{(3)}(q_1, q_2; n).

- p. 87

Novel Identification Algorithm - 3D

The output of the 3-D subsystem v_e^{(3,3)}(q_1, q_2; n) can be written as

v_e^{(3,3)}(q_1, q_2; n) = U_e^{(3)} h^{(3)}(q_1, q_2; n - q_1 - q_2)    (60)

The matrix U_e^{(3)} has the dimensions \binom{M}{3} \times \binom{M}{3}.

As an example, for M = 4 and ℓ = 3, the matrix U_e^{(3)} is given as

U_e^{(3)} =
[ a_1 a_2 a_3    a_1 a_2 a_3^2    a_1 a_2^2 a_3    a_1^2 a_2 a_3 ]
[ a_1 a_2 a_4    a_1 a_2 a_4^2    a_1 a_2^2 a_4    a_1^2 a_2 a_4 ]
[ a_1 a_3 a_4    a_1 a_3 a_4^2    a_1 a_3^2 a_4    a_1^2 a_3 a_4 ]
[ a_2 a_3 a_4    a_2 a_3 a_4^2    a_2 a_3^2 a_4    a_2^2 a_3 a_4 ]
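As a concrete check, the rows of U_e^{(3)} can be generated from the amplitude triplets. The sketch below (hypothetical helper, not thesis code) reproduces the column pattern shown above for the square M = 4, ℓ = 3 case: all exponents equal to one, then a single squared factor moving from the last position to the first.

```python
from itertools import combinations
import numpy as np

def ensemble_input_matrix(a, ell):
    """Build the ensemble input matrix U_e^(ell) for the square case
    illustrated on the slides: one row per ell-combination of the
    amplitude levels; columns use exponent patterns (1,...,1),
    (1,...,1,2), ..., (2,1,...,1). Hypothetical sketch."""
    rows = []
    for combo in combinations(a, ell):
        row = [np.prod(combo)]                 # all exponents equal to 1
        for pos in range(ell - 1, -1, -1):     # square one factor at a time
            exps = [1] * ell
            exps[pos] = 2
            row.append(np.prod([c ** e for c, e in zip(combo, exps)]))
        rows.append(row)
    return np.array(rows)
```

For M = 4 and ℓ = 3 this yields a 4 × 4 matrix whose first row is (a_1 a_2 a_3, a_1 a_2 a_3^2, a_1 a_2^2 a_3, a_1^2 a_2 a_3), matching the example.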

- p. 88

Novel Identification Algorithm - 3D

From (57)-(60), we get the desired calculation formula for the 3-D kernel vectors,

h^{(3)}(q_1, q_2; n - q_1 - q_2) = [U_e^{(3)}]^{-1} v_e^{(3,3)}(q_1, q_2; n)    (61)

where

v_e^{(3,3)}(q_1, q_2; n) = y_e^{(3)}(q_1, q_2; n) - \left( \sum_{j=1}^{3} S^{(M)}_{31,j} v_e^{(1,1)}(n - n_j^{(3,1)}) + \sum_{j=1}^{3} S^{(M)}_{32,j} v_e^{(2,2)}(q_j^{(3,2)}; n - n_j^{(3,2)}) \right)    (62)
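Equation (61) amounts to solving one small linear system per delay pair, after (62) has removed the lower-order contributions. A minimal sketch (hypothetical function names; solving the system is preferable to forming the explicit inverse):

```python
import numpy as np

def recover_kernel(U_e, v_iso):
    """Recover h^(3)(q1, q2; .) from the isolated 3-D subsystem
    output, as in (61): h = [U_e^(3)]^{-1} v_e^(3,3). Sketch only."""
    return np.linalg.solve(U_e, v_iso)

def isolate_third_order(y, v_lower_sum):
    """The isolation step (62): subtract the summed 1-D and 2-D
    pick-up contributions from the measured ensemble output."""
    return y - v_lower_sum
```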

- p. 89

Novel Identification Algorithm - 3D

Fig. 9 depicts the identification method for 3-D kernels as described by (61) and (62).

[Figure 9: block diagram]

Figure 9: Method used for identification of 3-D Volterra kernels, h^{(3)}(q_1, q_2; n).

- p. 90

Novel Identification Algorithm - ℓ-D

■ Now we identify the ℓ-D kernel vectors by using the response of the nonlinear system to the ℓ-D ensemble input vector and all the previously computed subsystem outputs, v_e^{(k,k)}(q_1, ..., q_{k-1}; n).

■ As with the 1-, 2- and 3-D ensemble input vectors, the ℓ-D input ensemble vector can be written using the input formation matrices and the 1-D input ensemble:

x_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n) = \sum_{i=1}^{ℓ} T^{(M)}_{ℓ,i} x_e^{(1)}(n - n_i^{(ℓ)})    (63)


- p. 91

Novel Identification Algorithm - ℓ-D

■ The response of the nonlinear system to the ensemble input in (63) can be written in terms of the outputs of the subsystems,

y_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n) = N[x_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n)] = \sum_{k=1}^{ℓ} H^{(k)}[x_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n)] = \sum_{k=1}^{ℓ} v_e^{(ℓ,k)}(q_1, ..., q_{ℓ-1}; n)    (64)

■ Fig. 10 draws a picture of (64), showing that for an ℓ-D input ensemble the outputs of all subsystems H^{(k)} with k > ℓ are equal to zero.


- p. 92

Novel Identification Algorithm - ℓ-D

[Figure 10: block diagram]

Figure 10: Pictorial description for (64). The output of the nonlinear system N for the ℓ-D input ensemble x_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n) is equal to the sum of the outputs of subsystems H^{(1)} through H^{(ℓ)}.

- p. 93

Novel Identification Algorithm - ℓ-D

v_e^{(ℓ,k)}(q_1, ..., q_{ℓ-1}; n) for k = 1, ..., ℓ-1 can be obtained from the previous subsystem outputs:

v_e^{(ℓ,1)}(q_1, ..., q_{ℓ-1}; n) = \sum_{j=1}^{\binom{ℓ}{1}} S^{(M)}_{ℓ1,j} v_e^{(1,1)}(n - n_j^{(ℓ,1)})

...

v_e^{(ℓ,k)}(q_1, ..., q_{ℓ-1}; n) = \sum_{j=1}^{\binom{ℓ}{k}} S^{(M)}_{ℓk,j} v_e^{(k,k)}(q_{j,1}^{(ℓ,k)}, ..., q_{j,k-1}^{(ℓ,k)}; n - n_j^{(ℓ,k)})    (65)

for k = 2, 3, ..., ℓ-1.

- p. 94

Novel Identification Algorithm - ℓ-D

The output of the ℓ-D subsystem can be written as

v_e^{(ℓ,ℓ)}(q_1, ..., q_{ℓ-1}; n) = H^{(ℓ)}[x_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n)] = \sum_{i=0}^{N - q_{ℓ-1}} U_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n - i) h^{(ℓ)}(q_1, ..., q_{ℓ-1}; i)    (66)

The input matrix U_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n - i) is replaced with U_e^{(ℓ)} δ(n - q_{ℓ-1} - i). The matrix U_e^{(ℓ)} has the dimensions \binom{M}{ℓ} \times \binom{M}{ℓ} and can be written in terms of the amplitudes a_1, a_2, ..., a_M.

- p. 95

Novel Identification Algorithm - ℓ-D

As an example, for M = 5 and ℓ = 4, the matrix U_e^{(4)} is given as

U_e^{(4)} =
[ a_1 a_2 a_3 a_4    a_1 a_2 a_3 a_4^2    a_1 a_2 a_3^2 a_4    a_1 a_2^2 a_3 a_4    a_1^2 a_2 a_3 a_4 ]
[ a_1 a_2 a_3 a_5    a_1 a_2 a_3 a_5^2    a_1 a_2 a_3^2 a_5    a_1 a_2^2 a_3 a_5    a_1^2 a_2 a_3 a_5 ]
[ a_1 a_2 a_4 a_5    a_1 a_2 a_4 a_5^2    a_1 a_2 a_4^2 a_5    a_1 a_2^2 a_4 a_5    a_1^2 a_2 a_4 a_5 ]
[ a_1 a_3 a_4 a_5    a_1 a_3 a_4 a_5^2    a_1 a_3 a_4^2 a_5    a_1 a_3^2 a_4 a_5    a_1^2 a_3 a_4 a_5 ]
[ a_2 a_3 a_4 a_5    a_2 a_3 a_4 a_5^2    a_2 a_3 a_4^2 a_5    a_2 a_3^2 a_4 a_5    a_2^2 a_3 a_4 a_5 ]

- p. 96

Novel Identification Algorithm - ℓ-D

The ℓ-D Volterra kernel vectors can be written in the following form:

h^{(ℓ)}(q_1, ..., q_{ℓ-1}; n - q_{ℓ-1}) = [U_e^{(ℓ)}]^{-1} v_e^{(ℓ,ℓ)}(q_1, ..., q_{ℓ-1}; n)    (67a)

Here,

v_e^{(ℓ,ℓ)}(q_1, ..., q_{ℓ-1}; n) = y_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n) - \sum_{j=1}^{\binom{ℓ}{1}} S^{(M)}_{ℓ1,j} v_e^{(1,1)}(n - n_j^{(ℓ,1)}) - \sum_{k=2}^{ℓ-1} \sum_{j=1}^{\binom{ℓ}{k}} S^{(M)}_{ℓk,j} v_e^{(k,k)}(q_{j,1}^{(ℓ,k)}, ..., q_{j,k-1}^{(ℓ,k)}; n - n_j^{(ℓ,k)})    (67b)

- p. 97

Novel Identification Algorithm - ℓ-D

(67) shows that our algorithm can form the estimate for any Volterra kernel independently of the other kernels. Let us define the following output pick-up operators for k = 1, 2, ..., ℓ-1:

S^{(M)}_{ℓ,k}[v_e^{(k,k)}(q_1, ..., q_{k-1}; n)] = \sum_{j=1}^{\binom{ℓ}{k}} S^{(M)}_{ℓk,j} v_e^{(k,k)}(q_{j,1}^{(ℓ,k)}, ..., q_{j,k-1}^{(ℓ,k)}; n - n_j^{(ℓ,k)})    (68)

- p. 98

Novel Identification Algorithm - ℓ-D

With this definition, (67b) can be rewritten in a more compact form:

v_e^{(ℓ,ℓ)}(q_1, ..., q_{ℓ-1}; n) = y_e^{(ℓ)}(q_1, ..., q_{ℓ-1}; n) - \sum_{k=1}^{ℓ-1} S^{(M)}_{ℓ,k}[v_e^{(k,k)}(q_1, ..., q_{k-1}; n)]    (69)

Fig. 11 depicts the identification of the Volterra kernels of orders one through M using the proposed algorithm. In this figure the output pick-up operators S^{(M)}_{ℓ,k}[·] are utilized to simplify the picture.

- p. 99

Novel Identification Algorithm - ℓ-D

[Figure 11: block diagram]

Figure 11: Proposed Volterra kernel identification method using multilevel deterministic sequences as inputs.

- p. 100

Deterministic Sequence Example

■ We consider as an example the identification of a Volterra filter with M = 3 and N = 2.

■ The overall deterministic input sequence that should be applied to identify the kernels of this system is shown in Fig. 12.

■ This figure depicts all the input ensembles utilized for the identification of the 1-D, 2-D and 3-D nonlinear subsystems.


- p. 101

Deterministic Sequence Example

[Figure 12: plot of the input sequence over n = 0, ..., 41, showing the ensembles x^{(1)}(1; n), x^{(1)}(2; n), x^{(1)}(3; n), x^{(2)}((1,2),1; n), x^{(2)}((1,3),1; n), x^{(2)}((2,3),1; n), x^{(2)}((1,2),2; n), x^{(2)}((1,3),2; n), x^{(2)}((2,3),2; n) and x^{(3)}((1,2,3),1,1; n) with amplitude levels a_1 = 1, a_2 = -1, a_3 = 2]

Figure 12: Deterministic multilevel input sequence

- p. 102

Identification Simulations

We present two numerical simulations to illustrate the performance of our novel identification procedure; the setups for the examples are taken from Zhou and Giannakis (1997) and Nowak and Van Veen (1994b).

Simulation 1: We simulate a second-order Volterra filter with memory length N = 2,

y(n) = \sum_{i_1=0}^{2} b_1(i_1) x(n - i_1) + \sum_{i_1=0}^{2} \sum_{i_2=i_1}^{2} b_2(i_1, i_2) x(n - i_1) x(n - i_2)    (70)

The average input power is unity for both the PRMS and our multilevel sequence. Independent GWN of power 0.1 is added to the system output to represent observation noise.
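The model in (70) can be evaluated directly by nested loops over the kernel indices. The sketch below (a hypothetical helper, assuming zero prehistory for the input; not the thesis code) shows one way the simulated filter output might be computed:

```python
import numpy as np

def volterra2_output(x, b1, b2):
    """Evaluate the second-order Volterra filter of (70):
    y(n) = sum_{i1} b1(i1) x(n-i1)
         + sum_{i1 <= i2} b2(i1,i2) x(n-i1) x(n-i2).
    Zero prehistory is assumed for the input signal."""
    taps = len(b1)  # number of taps = N + 1, here 3
    xp = np.concatenate([np.zeros(taps - 1), np.asarray(x, float)])
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i1 in range(taps):
            y[n] += b1[i1] * xp[n + taps - 1 - i1]
            for i2 in range(i1, taps):
                y[n] += b2[i1][i2] * xp[n + taps - 1 - i1] * xp[n + taps - 1 - i2]
    return y
```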

- p. 103

Identification Simulations

In the table on p. 104, the averaged squared error between the estimated and true kernels and the number of floating point operations required are given for four different input sequence lengths. The squared kernel error averaged over independent trials is defined as

error = \sum_{i_1=0}^{N-1} [b_1(i_1) - \hat{b}_1(i_1; n)]^2 + \sum_{i_1=0}^{N-1} \sum_{i_2=i_1}^{N-1} [b_2(i_1, i_2) - \hat{b}_2(i_1, i_2; n)]^2

From these results it is clear that our algorithm uses fewer operations and gives better results than the PRMS method (Nowak and Van Veen, 1994b).

- p. 104

Identification Simulations

Averaged Squared Error of Estimates for Simulation 1

  PRMS of Nowak and Van Veen (1994b)        proposed deterministic input sequence
  length   error          flops             length   error          flops
  27       7.80 x 10^-1   1.22 x 10^3       15       2.26 x 10^-1   0.12 x 10^3
  64       9.93 x 10^-2   1.64 x 10^3       60       5.42 x 10^-2   0.29 x 10^3
  125      2.89 x 10^-2   2.24 x 10^3       120      2.84 x 10^-2   0.53 x 10^3
  343      6.18 x 10^-3   4.12 x 10^3       330      9.70 x 10^-3   1.34 x 10^3

- p. 105

Identification Simulations

Simulation 2: We simulate a fourth-order Volterra filter with memory length N = 3, following Example 2 in Zhou and Giannakis (1997),

y(n) = x(n)^4 + 4.8 x(n)^3 x(n-1) + 4.8 x(n)^2 x(n-1)^2 - 6 x(n)^2 x(n-1) x(n-2) + 14.4 x(n) x(n-1) x(n-2) x(n-3)

The average input power is set to 4, and additive independent Gaussian white observation noise with variance 0.5 is present. The data length for the PSK input is 4096. Our multilevel sequence is of length 211, and we send it through 19 times; thus our total input length is 4009.

- p. 106

Identification Simulations

The table on p. 107 shows the true values of the non-redundant kernels and the means and standard deviations of the estimates from our algorithm and from the PSK input method of Zhou and Giannakis (1997). There are five nonzero Volterra kernels, and their values are taken from (Zhou and Giannakis, 1997). The results for PSK and for our algorithm are averaged over 200 independent trials. The results for our algorithm are better than those for PSK inputs, and our estimates are very accurate despite the high order of nonlinearity and the presence of noise.

- p. 107

Identification Simulations

Results for Simulation 4.2

  (i1, i2, i3, i4)                    (0,0,0,0)  (0,0,0,1)  (0,0,1,1)  (0,0,1,2)  (0,1,2,3)
  true b4(i1,i2,i3,i4)                1.0000     4.8000     4.8000     -6.0000    14.4000
  mean of b4 estimates for PSK        0.9944     4.7927     4.8006     -6.0084    14.3892
  std of b4 estimates for PSK         0.2013     0.1940     0.1745     0.1766     0.1004
  mean of b4 estimates for our alg.   1.0002     4.7999     4.8000     -6.0003    14.4001
  std of b4 estimates for our alg.    0.0024     0.0042     0.0021     0.0038     0.0069

- p. 108

Persistence of Excitation

■ We will prove that the multilevel input signals persistently excite a Volterra filter.

■ We start by rewriting the input-output relation of the ℓ-D cross-term subsystem H^{(ℓ)}:

y^{(ℓ)}(n) = H^{(ℓ)}[x(n)] = \sum_{p=1}^{\binom{N}{ℓ-1}} \sum_{i=0}^{N - sum(q_p)} h^{(ℓ)T}(q_p; i) x^{(ℓ)}(q_p; n - i)    (71)

■ We can reformulate the output in (71) as

y^{(ℓ)}(n) = H^{(ℓ)}[x(n)] = \sum_{p=1}^{\binom{N}{ℓ-1}} h^{(ℓ)T}(q_p) x_n^{(ℓ)}(q_p)    (72)


- p. 109

Persistence of Excitation

h^{(ℓ)}(q_p) and x_n^{(ℓ)}(q_p) are column vectors. h^{(ℓ)}(q_p) is a concatenation of the kernel vectors h^{(ℓ)}(q_p; i), whereas x_n^{(ℓ)}(q_p) is a concatenation of the expanded input vectors x^{(ℓ)}(q_p; n - i):

h^{(ℓ)}(q_p) = [ h^{(ℓ)}(q_p; 0);  h^{(ℓ)}(q_p; 1);  ...;  h^{(ℓ)}(q_p; N - sum(q_p)) ]

x_n^{(ℓ)}(q_p) = [ x^{(ℓ)}(q_p; n);  x^{(ℓ)}(q_p; n - 1);  ...;  x^{(ℓ)}(q_p; n - (N - sum(q_p))) ]    (73)

- p. 110

Persistence of Excitation

We can rewrite the linear combination in (72) as a single vector product,

y^{(ℓ)}(n) = H^{(ℓ)}[x(n)] = h^{(ℓ)T} x^{(ℓ)}(n)    (74)

by rearranging the vectors h^{(ℓ)}(q_p) and x_n^{(ℓ)}(q_p). h^{(ℓ)} and x^{(ℓ)}(n) are column vectors generated by concatenating, respectively, the vectors h^{(ℓ)}(q_p) and x_n^{(ℓ)}(q_p) for all \binom{N}{ℓ-1} possible delay structures q_p:

h^{(ℓ)} = [ h^{(ℓ)}(q_1);  h^{(ℓ)}(q_2);  ...;  h^{(ℓ)}(q_{\binom{N}{ℓ-1}}) ]

x^{(ℓ)}(n) = [ x_n^{(ℓ)}(q_1);  x_n^{(ℓ)}(q_2);  ...;  x_n^{(ℓ)}(q_{\binom{N}{ℓ-1}}) ]    (75)

- p. 111

Persistence of Excitation

The length of the vector h^{(ℓ)} is \binom{M}{ℓ} \binom{N+1}{ℓ}.

Suppose we begin observing the output of the ℓ-D subsystem at some time n and collect data over an observation period τ > 0. The output for times n through n + τ can be written as a single vector:

y^{(ℓ)}(n) = [ y^{(ℓ)}(n)  y^{(ℓ)}(n+1)  ...  y^{(ℓ)}(n+τ) ]^H

The output vector is related to the input by

y^{(ℓ)}(n) = X^{(ℓ)}(n) h^{(ℓ)*}    (76)

- p. 112

Persistence of Excitation

X^{(ℓ)}(n) is the data matrix as defined below:

X^{(ℓ)}(n) = [ x^{(ℓ)}(n)^H;  x^{(ℓ)}(n+1)^H;  ...;  x^{(ℓ)}(n+τ)^H ]    (77)

(·)^H denotes the Hermitian transpose, and (·)^* denotes complex conjugation. (76) defines the input-output relation of the ℓ-D subsystem H^{(ℓ)} as a single matrix product.

- p. 113

Persistence of Excitation

■ This equation is a pseudo-linear regression equation: although it is linear with respect to the kernels, the expanded input matrix X^{(ℓ)}(n) consists of nonlinear products of x(n).

■ For this pseudo-linear regression problem we can formulate the least-squares solution. The optimal least-squares solution for the kernel vector in (76) can be written as

\hat{h}^{(ℓ)*} = \bar{R}^{(ℓ)-1} d    (78)

■ (76) and (78) are portrayed in Fig. 13.


- p. 114

Persistence of Excitation

[Figure 13: the data matrix X^{(ℓ)}(n) has τ rows (number of observations) and K = \binom{M}{ℓ}\binom{N+1}{ℓ} columns (number of kernels), with τ > K; \bar{R}^{(ℓ)} = (1/τ) X^{(ℓ)H} X^{(ℓ)}]

Figure 13: Graphical description for (76) and (78).

- p. 115

Persistence of Excitation

In (78), \bar{R}^{(ℓ)} is the time-averaged autocorrelation matrix of the input, and d is the time-averaged cross-correlation vector between the input and the output. They are defined as given below:

\bar{R}^{(ℓ)} = (1/τ) X^{(ℓ)H}(n) X^{(ℓ)}(n) = (1/τ) \sum_{m=n}^{n+τ} x^{(ℓ)}(m) x^{(ℓ)H}(m)    (79)

d = (1/τ) X^{(ℓ)H}(n) y^{(ℓ)}(n) = (1/τ) \sum_{m=n}^{n+τ} x^{(ℓ)}(m) y^*(m)    (80)
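Equations (78)-(80) reduce to a few lines of linear algebra. A minimal sketch (hypothetical function name; shown for a noise-free, well-conditioned data matrix, where the normal-equation solve recovers the kernels exactly):

```python
import numpy as np

def ls_kernel_estimate(X, y):
    """Form the time-averaged correlations (79)-(80) from the data
    matrix X (rows are expanded input vectors) and solve the
    least-squares problem (78). Illustrative sketch only."""
    tau = X.shape[0]
    R_bar = X.conj().T @ X / tau      # (79) time-averaged autocorrelation
    d = X.conj().T @ y / tau          # (80) time-averaged cross-correlation
    return np.linalg.solve(R_bar, d)  # (78) least-squares kernel estimate
```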

- p. 116

Persistence of Excitation

■ The least-squares problem as formulated in (78) has a unique solution if the autocorrelation matrix \bar{R}^{(ℓ)} is invertible. This brings us to the definition of persistence of excitation.

■ Basically, signals for which the time-average autocorrelation matrix is nonsingular for all times are said to persistently excite a system.

■ We formulate the persistence of excitation condition for the ℓ-D cross-term subsystem H^{(ℓ)}.


- p. 117

Persistence of Excitation

■ Definition: Persistence of Excitation for H^{(ℓ)}
Let τ > 0 be a fixed observation period of choice. Let λ_max and λ_min denote the largest and smallest eigenvalues of the time-average correlation matrix \bar{R}^{(ℓ)}, as defined in (79). If there exist positive constants ρ_1, ρ_2 > 0 such that for every time instant k

ρ_1 ≤ λ_min ≤ λ_max ≤ ρ_2    (81)

then the input signal x(n) is said to be persistently exciting (PE) for the ℓ-D cross-term subsystem H^{(ℓ)}.

- p. 118

Persistence of Excitation

■ It can be shown that the time-average correlation matrix \bar{R}^{(ℓ)} for our deterministic ℓ-D ensemble input signals x_e^{(ℓ)} becomes a block diagonal matrix:

\bar{R}^{(ℓ)} = (1/τ)
[ R^{(ℓ)}   0        ...  0       ]
[ 0         R^{(ℓ)}  ...  0       ]
[ ...       ...      ...  ...     ]
[ 0         0        ...  R^{(ℓ)} ]    (82)

where each 0 denotes a zero block of size \binom{M}{ℓ} \times \binom{M}{ℓ}.


- p. 119

Persistence of Excitation

■ R^{(ℓ)} in (82) has a size of \binom{M}{ℓ} \times \binom{M}{ℓ}.

■ R^{(ℓ)} can be written as

R^{(ℓ)} = [ U_e^{(ℓ)H} U_e^{(ℓ)} ]^T    (83)

where U_e^{(ℓ)} is the ℓ-D ensemble input matrix.

■ Hence, for our deterministic input ensemble, the eigenvalues of the sample correlation matrix \bar{R}^{(ℓ)} = (1/τ) X^{(ℓ)H}(n) X^{(ℓ)}(n) are equal to the eigenvalues of the matrix R^{(ℓ)} = [ U_e^{(ℓ)H} U_e^{(ℓ)} ]^T.
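The key fact behind this eigenvalue statement is that a block diagonal matrix diag(R, ..., R) has exactly the eigenvalues of R, each repeated once per block. A small numerical illustration of that fact (hypothetical helper, not thesis code; the (1/τ) factor in (82) only scales all eigenvalues uniformly):

```python
import numpy as np

def blockdiag_eigs_match(R, n_blocks):
    """Verify that diag(R, ..., R) = kron(I, R) has the eigenvalues
    of the Hermitian block R, each repeated n_blocks times."""
    big = np.kron(np.eye(n_blocks), R)
    eig_big = np.sort(np.linalg.eigvalsh(big))
    eig_small = np.sort(np.linalg.eigvalsh(R))
    return np.allclose(eig_big, np.repeat(eig_small, n_blocks))
```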


- p. 120

Persistence of Excitation

■ A matrix is said to be positive definite if all of its eigenvalues are positive. Under this definition, positive definiteness of R^{(ℓ)} is necessary and sufficient for the persistence of excitation condition given in (81).

■ The Hermitian matrix R^{(ℓ)} = [ U_e^{(ℓ)H} U_e^{(ℓ)} ]^T will be a positive definite matrix if and only if the square matrix U_e^{(ℓ)} is nonsingular.

■ Therefore, the ℓ-D input sequence is PE for the subsystem H^{(ℓ)} if and only if the input ensemble matrix U_e^{(ℓ)} is nonsingular.


- p. 121

Persistence of Excitation

■ Hence, the multilevel input sequence will be PE for the overall nonlinear system N if and only if all the input ensemble matrices U_e^{(ℓ)}, 1 ≤ ℓ ≤ M, are nonsingular, for which the nonsingularity of U_e^{(1)} is a sufficient condition.

■ The input ensemble is assured to be PE when we choose distinct and nonzero amplitude levels a_1, a_2, ..., a_M. ∎
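The closing condition is easy to check numerically. The sketch below assumes a Vandermonde-type 1-D ensemble matrix with entries a_i^j, j = 1, ..., M (an assumption for illustration, since the slides do not display U_e^{(1)} explicitly); such a matrix is nonsingular exactly when the amplitude levels are distinct and nonzero:

```python
import numpy as np

def is_pe_amplitudes(a):
    """Check the sufficient PE condition: the assumed Vandermonde-type
    U_e^(1) (entries a_i^j, j = 1..M) has full rank iff the amplitude
    levels are distinct and nonzero. Hypothetical helper."""
    a = np.asarray(a, dtype=float)
    M = len(a)
    U1 = np.column_stack([a ** j for j in range(1, M + 1)])
    return bool(np.linalg.matrix_rank(U1) == M)
```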


- p. 122

Least Squares Solution

■ In the case of a general, nonspecific input sequence, the least-squares solution, if it exists, requires the calculation of the inverse of an autocorrelation matrix \bar{R}^{(ℓ)} of size \binom{M}{ℓ}\binom{N+1}{ℓ} \times \binom{M}{ℓ}\binom{N+1}{ℓ}.

■ However, for our specific ℓ-D input sequences, the autocorrelation matrix greatly simplifies and attains the very sparse form of a block diagonal matrix.

■ It is only necessary to calculate the inverse of the matrix R^{(ℓ)}, of size \binom{M}{ℓ} \times \binom{M}{ℓ}.

■ Hence, our identification algorithm provides a very special input sequence for which the least-squares solution always exists and is much easier to calculate than in the case of general inputs.
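The size reduction above is easy to quantify with binomial coefficients. A small sketch (hypothetical helper names) computing the two inverse sizes side by side:

```python
from math import comb

def ls_inverse_sizes(M, N, ell):
    """Return (general, proposed): the side length of the
    autocorrelation inverse for a general input, C(M,ell)*C(N+1,ell),
    versus the single block size C(M,ell) needed for the proposed
    multilevel input. Illustrative only."""
    general = comb(M, ell) * comb(N + 1, ell)
    proposed = comb(M, ell)
    return general, proposed
```

For instance, with M = 3, N = 2 and ℓ = 2 the general case requires inverting a 9 × 9 matrix, versus a 3 × 3 block for the proposed input.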


- p. 123

Communication Channel Identification

� Nonlinear channel identification is important for mitigating the effects of nonlinear distortions and for the equalization of nonlinear communication channels.

� The performance of efforts to compensate nonlinearities and equalize the channel is highly dependent on the accuracy of the nonlinear channel estimate.

� Hence, nonlinear channel identification has been a subject of significance.


- p. 124

Communication Channel Identification

� We have applied the novel identification method to the identification of communication channels with nonlinearities.

� We have modelled the channel as a third-order discrete Volterra filter, and the Volterra kernels are measured using deterministic input sequences and the corresponding channel outputs.

� We present two numerical examples to illustrate the performance of our novel identification procedure.


- p. 125

Communication Channel Identification

Simulation 5.1: We simulate a linear-quadratic-cubic Volterra channel with memory length N = 2.

y(n) = x(n) + 0.5x(n − 1) − 0.8x(n − 2) + x(n)^2 + 0.6x(n)x(n − 1) − 0.3x(n − 1)^2 + x(n)^3 + 1.2x(n)^2x(n − 1) + 0.8x(n)x(n − 1)^2 − 0.5x(n − 1)^3 + x(n)x(n − 1)x(n − 2)

We use QPSK modulated signals as the input, where the deterministic input levels are chosen from the set 4e^{j(2πk/4+π/4)}, k = 0, 1, 2, 3.
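The channel of Simulation 5.1 can be transcribed directly from the difference equation above (observation noise omitted); `channel` is a hypothetical helper name, and samples before n = 0 are taken as zero.

```python
def channel(x):
    """Noise-free output of the Simulation 5.1 Volterra channel, N = 2."""
    def g(n):                       # zero-padded input access
        return x[n] if n >= 0 else 0.0
    y = []
    for n in range(len(x)):
        x0, x1, x2 = g(n), g(n - 1), g(n - 2)
        y.append(x0 + 0.5 * x1 - 0.8 * x2          # linear kernel
                 + x0 ** 2 + 0.6 * x0 * x1 - 0.3 * x1 ** 2   # quadratic kernel
                 + x0 ** 3 + 1.2 * x0 ** 2 * x1 + 0.8 * x0 * x1 ** 2
                 - 0.5 * x1 ** 3 + x0 * x1 * x2)   # cubic kernel
    return y

# A unit pulse excites all three kernel orders at once:
print(channel([1.0, 0.0, 0.0]))
```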

- p. 126

Communication Channel Identification

Additive independent GWN observation noise with unit variance is present. Table 1 shows the true values for the non-redundant kernels and the means and standard deviations of the estimates from our algorithm and the PSK input method of Zhou and Giannakis (1997).

- p. 127

Communication Channel Identification

Table 1: Results for Simulation 5.1

(i1) (0) (1) (2)

true b1 (i1) 1.0000 0.5000 -0.8000

mean of b1 (i1) for Zhou and Giannakis (1997) 0.9955 0.4886 -0.8150

mean of b1 (i1) for our method 1.0045 0.5108 -0.8164

std of b1 (i1) for Zhou and Giannakis (1997) 0.5195 0.3758 0.3326

std of b1 (i1) for our method 0.1219 0.1266 0.1237

(i1, i2) (0, 0) (0, 1) (1, 1)

true b2 (i1, i2) 1.0000 0.6000 -0.3000

mean of b2 (i1, i2) for Zhou and Giannakis (1997) 1.0035 0.6026 -0.2958

mean of b2 (i1, i2) for our method 1.0009 0.5984 -0.3035

std of b2 (i1, i2) for Zhou and Giannakis (1997) 0.1148 0.1335 0.0788

std of b2 (i1, i2) for our method 0.0014 0.0764 0.0050

- p. 128

Communication Channel Identification

(i1, i2, i3) (0, 0, 0) (0, 0, 1) (0, 1, 1) (1, 1, 1) (0, 1, 2)

true b3 (i1, i2, i3) 1.0000 1.2000 0.8000 -0.5000 0.6000

mean of b3 (i1, i2, i3) for Zhou and Giannakis (1997) 0.99995 1.2005 0.7993 -0.5009 0.5997

mean of b3 (i1, i2, i3) for our method 1.0002 1.2006 0.8019 -0.4997 0.6036

std of b3 (i1, i2, i3) for Zhou and Giannakis (1997) 0.0164 0.0235 0.0215 0.0281 0.0204

std of b3 (i1, i2, i3) for our method 0.0077 0.0196 0.0185 0.0082 0.0285

- p. 129

Bandpass Communication Channel

The bandpass Volterra series is employed in the baseband representation of narrow-band communication channels. The bandpass Volterra filter including nonlinearities up to third order is given as:

y(n) = Σ_{i1=0}^{N} b1(i1) x(n − i1) + Σ_{i1=0}^{N} Σ_{i2=0}^{N} Σ_{i3=i2}^{N} b3(i1, i2, i3) x∗(n − i1) x(n − i2) x(n − i3)    (84)
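Equation (84) can be implemented directly; the sketch below stores the kernels as dictionaries keyed by delay index (the helper name `bandpass_volterra` and the dictionary layout are illustrative choices, with the caller expected to supply only non-redundant b3 keys with i3 ≥ i2).

```python
def bandpass_volterra(x, b1, b3):
    """Output of the third-order bandpass Volterra filter of (84).

    x  : list of complex input samples (zero outside the given range)
    b1 : dict {i1: coefficient} for the linear kernel
    b3 : dict {(i1, i2, i3): coefficient}, one conjugated input factor
    """
    def g(n):
        return x[n] if 0 <= n < len(x) else 0j
    y = []
    for n in range(len(x)):
        acc = sum(c * g(n - i1) for i1, c in b1.items())
        for (i1, i2, i3), c in b3.items():
            acc += c * g(n - i1).conjugate() * g(n - i2) * g(n - i3)
        y.append(acc)
    return y

# Toy kernels loosely following the Simulation 5.2 channel's linear part:
y = bandpass_volterra([1 + 1j, 0.0, 0.0],
                      {0: 1.0, 1: 0.5 + 0.5j, 2: -0.6},
                      {(0, 1, 1): 1.0})
```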

- p. 130

Bandpass Communication Channel

We can easily modify the identification method we developed for the regular Volterra filter to the bandpass Volterra channel case.

Simulation 5.2: We simulate a linear-cubic "bandpass" Volterra filter, where the input-output relationship for the bandpass Volterra filter is given in (84).

y(n) = x(n) + (0.5 + 0.5j)x(n − 1) − 0.6x(n − 2) + x(n)∗x(n − 1)^2 + (0.4 + 0.4j)x(n − 1)∗x(n)^2 − 0.4x(n − 1)∗x(n − 2)^2 + 0.6x(n − 2)∗x(n − 1)^2 + (0.6 + 0.7j)x(n − 2)∗x(n)^2 + 0.5x(n)∗x(n − 2)^2 + (0.3 + 0.4j)x(n)∗x(n − 1)x(n − 2)

- p. 131

Bandpass Communication Channel

The channel model we simulate has a memory length of N = 2. We use QPSK modulated signals as the input, where we choose the input levels for our deterministic sequence from the set 2e^{j(2πk/4+π/4)}, k = 0, 1, 2, 3. Additive independent GWN observation noise with variance 0.5 is present. We also realized the method for bandpass Volterra kernel identification given in Cheng and Powers (2001) for the simulation setup above.
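The deterministic level set above is easy to generate and to sanity-check: the four QPSK points 2e^{j(2πk/4+π/4)} are distinct and nonzero, so the PE condition on the amplitude levels is met.

```python
import cmath
import math

# The four QPSK amplitude levels of Simulation 5.2: radius-2 points at
# angles pi/4, 3pi/4, 5pi/4, 7pi/4.
levels = [2 * cmath.exp(1j * (2 * math.pi * k / 4 + math.pi / 4))
          for k in range(4)]

# All levels are nonzero (|a_k| = 2) and mutually distinct.
distinct = len({(round(a.real, 9), round(a.imag, 9)) for a in levels}) == 4
print(distinct)  # True
```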

- p. 132

Bandpass Communication Channel

Table 2 shows the true values for the non-redundant bandpass Volterra kernels and the means and standard deviations of the estimates from our algorithm and the method detailed in Cheng and Powers (2001). The results for our algorithm are better than those for the method of Cheng and Powers (2001), even though our method employed an input sequence of shorter length.

- p. 133

Bandpass Communication Channel

Table 2: Results for Simulation 5.2

(i1) (0) (1) (2)

true b1 (i1) 1.0000 0.5000+0.5000j -0.6000

mean of b1 (i1) for Cheng and Powers (2001) 0.9995+0.0065j 0.5091+0.4971j -0.5967+0.0076j

mean of b1 (i1) for our method 0.9999+0.0005j 0.5003+0.4994j -0.5999+0.0006j

std of b1 (i1) for Cheng and Powers (2001) 0.1131 0.1086 0.1050

std of b1 (i1) for our method 0.0183 0.0193 0.0184

- p. 134

Bandpass Communication Channel

(i1, i2, i3) (0, 1, 1) (1, 0, 0) (1, 2, 2) (2, 1, 1)

true b3 (i1, i2, i3) 1.00 0.40+0.40j -0.40 0.60

mean of b3 (i1, i2, i3) for Cheng and Powers (2001) 0.9995+0.0024j 0.4017+0.3998j -0.3993+0.0016j 0.5987-0.0004j

mean of b3 (i1, i2, i3) for our method 1.0000-0.0007j 0.3995+0.3999j -0.4000+0.0004j 0.6002+0.0001j

std of b3 (i1, i2, i3) for Cheng and Powers (2001) 0.0270 0.0320 0.0297 0.0259

std of b3 (i1, i2, i3) for our method 0.0166 0.0144 0.0160 0.0139

- p. 135

Bandpass Communication Channel

(i1, i2, i3) (2, 0, 0) (0, 2, 2) (0, 1, 2)

true b3 (i1, i2, i3) 0.60+0.70j 0.50 0.30+0.40j

mean of b3 (i1, i2, i3) for Cheng and Powers (2001) 0.5995+0.6995j 0.4996-0.0010j 0.3002+0.4003j

mean of b3 (i1, i2, i3) for our method 0.6005+0.7002j 0.5000+0.0007j 0.2993+0.3993j

std of b3 (i1, i2, i3) for Cheng and Powers (2001) 0.0243 0.0263 0.0260

std of b3 (i1, i2, i3) for our method 0.0138 0.0160 0.0217

- p. 136

Lattice Realization for Volterra

� We present a novel method for realizing nonlinear Volterra filters using the reduced-order 2-D orthogonal lattice filter structure.

� This method provides an orthogonal structure for arbitrary input signals and is capable of handling arbitrary lengths of memory for the system model.

� A recursive least squares adaptive second-order Volterra filter based on this structure is included to verify the performance.


- p. 137

Lattice Realization for Volterra

Consider the nonlinear system with the input-output relation based on the truncated second-order Volterra series expansion.

d(n) = Σ_{i1=0}^{N−1} b1(i1; n) x(n − i1) + Σ_{i1=0}^{N−1} Σ_{i2=i1}^{N−1} b2(i1, i2; n) x(n − i1)x(n − i2)    (85)

It is possible to describe the input-output relationship given in (85) as a pseudo-linear regression in the form of a vector product.

d(n) = X2^T(n) B2(n)    (86)

- p. 138

Lattice Realization for Volterra

X2(n) will be of the form described in the following equation.

X2(n) = [x(n), x(n − 1), . . . , x(n − N + 1), x(n)^2, x(n)x(n − 1), . . . , x(n − N + 1)^2]^T    (87)

- p. 139

Lattice Realization for Volterra

B2(n) is a vector which contains all the Volterra kernels as required in (85). B2(n) is given as:

B2(n) = [b1(0; n), b1(1; n), . . . , b1(N − 1; n), b2(0, 0; n), b2(0, 1; n), . . . , b2(N − 1, N − 1; n)]^T    (88)
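As a quick check of the pseudo-linear regression form, the sketch below builds X2(n) and B2(n) with toy kernel values (N = 3, hypothetical numbers) and confirms that the inner product of (86) reproduces the double sum of (85):

```python
N = 3
x = [0.5, -1.0, 2.0, 0.3, 1.1]                            # arbitrary input samples
b1 = {i: 0.1 * (i + 1) for i in range(N)}                  # toy linear kernel
b2 = {(i, j): 0.05 * (i + j + 1)                           # toy quadratic kernel
      for i in range(N) for j in range(i, N)}

def g(n):                                # zero-padded input access
    return x[n] if n >= 0 else 0.0

def direct(n):                           # equation (85): explicit double sum
    s = sum(b1[i] * g(n - i) for i in range(N))
    s += sum(b2[i, j] * g(n - i) * g(n - j)
             for i in range(N) for j in range(i, N))
    return s

def regression(n):                       # equation (86): X2(n)^T B2(n)
    X2 = [g(n - i) for i in range(N)] + \
         [g(n - i) * g(n - j) for i in range(N) for j in range(i, N)]
    B2 = [b1[i] for i in range(N)] + \
         [b2[i, j] for i in range(N) for j in range(i, N)]
    return sum(a * b for a, b in zip(X2, B2))

print(abs(direct(4) - regression(4)) < 1e-9)  # True
```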

- p. 140

Lattice Realization for Volterra

� The direct form realization indicated by (85) and (86) can suffer from ill conditioning, especially in nonlinear adaptive filtering applications.

� In the literature, attempts have been made to find numerically robust alternative realization methods for the truncated Volterra filter (Lee and Mathews, 1993; Mathews, 1991; Ozden et al., 1996a; Syed and Mathews, 1994).

� The methods in Syed and Mathews (1994) and Ozden et al. (1996a) are both based on the multichannel lattice structure developed in Ling and Proakis (1984).


- p. 141

Lattice Realization for Volterra

� These methods convert the input signal vector given in (87) into a multichannel signal and apply orthogonalization to the multichannel signal while calculating the nonlinear output.

� We restructure the expanded input signal vector X2(n) into a 2-D array rather than using a multichannel setup.

� It is possible to realize the Volterra system as a joint-process estimator with a lattice-ladder structure instead of the direct form realization of (85).


- p. 142

Lattice Realization for Volterra

� We reshape the vector X2(n) into a 2-D array using the proposed ordering shown in Fig. 14.

� Fig. 15 shows the indexing arrangement we chose for the input array.

� Fig. 16 depicts the 2-D orthogonal lattice structure-based nonlinear joint-process estimator.

� The depicted full-complexity nonlinear joint-process estimator is complete with the lattice predictor part and the ladder section.


- p. 143

Lattice Realization for Volterra

[Figure: the 2-D input array — the bottom row holds the linear terms x(n), x(n−1), . . . , x(n−N+1); the row above holds the squares x(n)^2, x(n−1)^2, . . . , x(n−N+1)^2; the remaining diagonals hold the cross products x(n)x(n−1), x(n)x(n−2), . . . , x(n−N+2)x(n−N+1).]

Figure 14: Ordering scheme for the 2-D input array.

- p. 144

Lattice Realization for Volterra

[Figure: the 2-D input array indexed along k1 and k2; the entries are numbered 0, 1, 2, . . . , N−1 along the first row, then N, N+1, . . . , 2N−1, and so on up to M.]

Figure 15: Indexing scheme for the 2-D input array.

- p. 145

Lattice Realization for Volterra

[Figure: the input x(n) feeds a cascade of lattice prediction modules P(1)_{0,1}, P(1)_{1,2}, . . . , P(M)_{0,M} through delay elements; the resulting backward prediction errors are weighted by the ladder coefficients c0(n), c1(n), . . . , cM(n) and summed to form the estimate d̂(n), which is subtracted from the desired signal d(n) to give the error e(n).]

Figure 16: Full complexity nonlinear joint-process estimator.

- p. 146

Lattice Realization for Volterra

[Figure: a single lattice module P(m)_{p−m,p} — it takes the order-(m−1) forward and backward prediction errors, cross-couples them through the gain factors Gf and Gb, and outputs the order-m forward and backward prediction errors.]

Figure 17: Internal structure of the basic lattice module utilized in the nonlinear joint-process estimator.

- p. 147

Lattice Realization for Volterra

� The backward prediction errors b0^(0)(n), b1^(1)(n), . . . , bM^(M)(n) generated using the 2-D lattice filter are orthogonal to each other (Kayran, 1996b).

� This result provides the main advantage of our structure over the multichannel lattice structure in Syed and Mathews (1994). For the structure in Syed and Mathews (1994), although the backward prediction errors in different channels are orthogonal to each other, the elements within each channel are not orthogonalized.

� In our structure, however, the backward prediction errors are fully orthogonalized with respect to each other. This results in faster and less input-dependent adaptation.


- p. 148

Lattice Realization for Volterra

Simulation 1: The setting for the simulations is shown in Fig. 18. In the simulation, the adaptive filter was run with the same memory length N as that of the second-order Volterra filter to be identified. The Volterra system we identify has N = 4; hence there are 4 linear and 10 quadratic coefficients. The desired response signal d(n) was obtained by adding white Gaussian noise uncorrelated with the input signal to the output. The variance of the observation noise was chosen to obtain an SNR of 20 dB.
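The identification setup can be sketched in code. The version below is a minimal direct-form RLS identifier with the forgetting factor λ = 0.9975 of the simulation, but with N = 2 (2 linear + 3 quadratic coefficients) instead of N = 4, a noise-free desired signal, and plain-Python matrix updates; the thesis's 2-D lattice structure itself is not reproduced, and the helper names `expand` and `rls_identify` are hypothetical.

```python
import random

def expand(x0, x1):
    # X2(n) for N = 2: linear terms, then the non-redundant quadratic terms
    return [x0, x1, x0 * x0, x0 * x1, x1 * x1]

def rls_identify(x, w_true, lam=0.9975, delta=100.0):
    """Exponentially weighted RLS on the expanded regressor (direct form)."""
    m = len(w_true)
    w = [0.0] * m
    P = [[delta if i == j else 0.0 for j in range(m)] for i in range(m)]
    for n in range(len(x)):
        u = expand(x[n], x[n - 1] if n > 0 else 0.0)
        d = sum(wt * ui for wt, ui in zip(w_true, u))   # noise-free desired signal
        Pu = [sum(P[i][j] * u[j] for j in range(m)) for i in range(m)]
        k = [v / (lam + sum(ui * vi for ui, vi in zip(u, Pu))) for v in Pu]
        e = d - sum(wi * ui for wi, ui in zip(w, u))    # a priori error
        for i in range(m):
            w[i] += k[i] * e
        for i in range(m):                              # P = (P - k u^T P)/lam
            for j in range(m):
                P[i][j] = (P[i][j] - k[i] * Pu[j]) / lam
    return w

random.seed(0)
w_true = [1.0, 0.5, 1.0, 0.6, -0.3]     # N = 2 kernels borrowed from Simulation 5.1
x = [random.gauss(0.0, 1.0) for _ in range(400)]
w_hat = rls_identify(x, w_true)
```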

- p. 149

Lattice Realization for Volterra

� We present the learning curves in Fig. 19 for our lattice structure, the multichannel lattice structure of Syed and Mathews (1994), and the direct form transversal realization (Mathews, 1991), all with RLS adaptation. The error curves are the mean squared errors over 500 cycles, with λ = 0.9975.

� The novel lattice-based structure maintains the excellent numerical behavior of the lattice models. Our structure exhibits better performance than both the transversal realization and the multichannel lattice structure.


- p. 150

Lattice Realization for Volterra

[Figure: the input x(n) drives both the unknown system and the (lattice based) adaptive Volterra filter; observation noise w(n) is added to the unknown system output y(n) to form the desired signal d(n); the adaptive filter output d̂(n) is subtracted from d(n) to produce the error signal e(n).]

Figure 18: The general setup for the adaptive second-order Volterra filter identification simulations.

- p. 151

Lattice Realization for Volterra

Figure 19: Learning curves for different models; (i) multichannel lattice structure, (ii) transversal direct-form realization, (iii) model based on 2-D lattice structure.

- p. 152

Concluding Remarks

� This dissertation considered the design of a novel representation for discrete-time, time-invariant, finite-order Volterra filters.

� We also developed a novel extension of the unit impulse response to the case of the identification of nonlinear Volterra filters.

� We applied the developed identification algorithm successfully to nonlinear communication channels and baseband Volterra communication channels with communication signals as inputs.


- p. 153

Concluding Remarks

� Future work on identification might fuse the novel Volterra system representation with the use of random input sequences, rather than deterministic sequences.

� The novel representation might also be applied in the efficient implementation of Volterra filters and in transform domain structures.

� Another direction might be the use of the identification algorithm in the implementation of nonlinear compensators, nonlinear system inverses and equalization.


- p. 154

Concluding Remarks

� The work on the novel orthogonal quadratic Volterra filter realization based on the 2-D lattice structure can be adapted to the realization of Volterra filters with higher-order nonlinearities.

� The orthogonal structure can also be utilized in the realization of polynomial systems with feedback, such as bilinear systems.


- p. 155

Thank You

References

Agazzi, O., Messerschmitt, D. G. and Hodges,D. A., 1982. Nonlinear echo cancelation of datasignals, IEEE Trans. Communications, COM30(11),2421–2433.

Baik, H. K. and Mathews, V. J., 1993. Adaptive lat-tice bilinear filters, IEEE Trans. Signal Processing,41(6), 2033–2046.

Benedetto, S. and Biglieri, E., 1983. Nonlinearequalization of digital satellite channels, IEEEJournal on Selected Areas in Communications, SAC-1(1), 57–62.

Benedetto, S., Biglieri, E. and Castellani, V., 1987.Digital Transmission Theory, Prentice-Hall, NewJersey.

Benedetto, S., Biglieri, E. and Daffara, R., 1979.Modeling and performance evaluation of nonlin-ear satellite links–a Volterra series approach, IEEETransactions on Aerospace and Electronic Systems,AES15(4), 494–507.

155-1

Biglieri, E., Barberis, S. and Catena, M., 1988.Analysis and compensation of nonlinearities indigital transmission systems, IEEE Journal on Se-lected Areas in Communications, 6(1), 42–51.

Billings, S. A., 1980. Identification of nonlinearsystems–a survey, IEE Proceedings, 127(8), 272–285.

Boyd, S. and Chua, L., 1985. Fading memory andthe problem of approximating nonlinear opera-tors with Volterra series, IEEE Trans. on Circuitsand Systems, CAS32(11), 1150–1161.

Boyd, S., Tang, Y. S. and Chua, L. O., 1983. Measur-ing Volterra kernels (I), IEEE Trans. on Circuits andSystems, CAS-30(8), 571–577.

Chen, S., Billings, S. A. and Luo, W., 1989. Orthog-onal least squares methods and their applicationto nonlinear system identification, Int. J. Control,50, 1873–1896.

Cheng, C., 1998. Nonlinear communication sys-tem identification and compensation, Ph.D. the-sis, University of Texas, Austin, USA.

155-2

Cheng, C. and Powers, E. J., 2001. Optimal Volterrakernel estimation algorithms for a nonlinear com-munication system for PSK and QAM inputs,IEEE Trans. Signal Processing, 49(12), 147–163.

Ching, H. and Powers, E., 1993. Nonlinear channelequalization in digital satellite system, Proceedingsof GLOBECOM’93, volume 3, Houston, USA, pp.1639–1643.

Chua, L. and Liao, Y., 1983. Measuring Volterra ker-nels (II), International Journal of Circuit Theory andApplications, 17, 151–190.

Falconer, D. D., 1978. Adaptive equalization ofchannel nonlinearities in QAM data transmis-sion systems, Bell System Technology Journal, 57(7),2589–2611.

Frank, W. A., 1995. An efficient approximation tothe quadratic Volterra filter and its application torealtime loudspeaker linearization, Signal Process-ing, 45, 97–113.

Giannakis, G. B. and Serpedin, E., 2001. A bibliog-raphy on nonlinear system identification, SignalProcessing, 81(3), 533–580.

155-3

Hermann, R., 1990. Volterra modeling of digi-tal magnetic saturation recording channels, IEEETrans. Magnetics, 26(5), 2125–2127.

Kajikawa, Y., 2000. The adaptive Volterra filter: Itspresent and future, Electronics and Communicationsin Japan, 83(12), 51–61.

Kayran, A., 1996a. 2D ARMA lattice modellingusing two-channel AR lattice, Electronics Letters,32(16), 1434–1435.

Kayran, A., 1996b. Two-dimensional orthogonallattice structures for autoregressive modeling ofrandom fields, IEEE Trans. Signal Processing, 44(4),963–978.

Kayran, A. and Eksioglu, E., 2000. 2D FIR Wienerfilter realization using orthogonal lattice struc-ture, Electronics Letters, 36(12), 1078–1079.

Koh, T. and Powers, E. J., 1985. Second-orderVolterra filtering and its application to nonlin-ear system identification, IEEE Trans. on Acoust.Speech and Signal Processing, ASSP33(16), 1445–1455.

155-4

Korenberg, M. J. and Paarman, L. D., 1991. Or-thogonal approaches to time series analysis andsystem identification, IEEE Signal Proc. Magazine,8(3), 29–43.

Koukoulas, P. and Kalouptsidis, N., 1995. Nonlin-ear system identification using Gaussian inputs,IEEE Trans. Signal Processing, 43(8), 1831–1841.

Lazzarin, G., Pupolin, S. and Sarti, A., 1994. Non-linearity compensation in digital radio systems,IEEE Trans. Communications, 42, 988–998.

Lee, J. and Mathews, V. J., 1993. A fast recursiveleast squares adaptive second-order Volterra filterand its performance analysis, IEEE Trans. SignalProcessing, 41(3), 10871102.

Lee, Y. W. and Schetzen, M., 1965. Measurement ofthe Wiener kernels of a nonlinear system by cross-correlation, Int. J. Control, 2, 237–254.

Ling, F. and Proakis, J., 1984. A generalized mul-tichannel least-squares lattice algorithm basedon sequential processing stages, IEEE Trans. onAcoust. Speech and Signal Processing, ASSP32(2),381–389.

155-5

Manolakis, D., Ingle, V. and Kogon, S., 2000. Statis-tical and Adaptive Signal Processing, McGraw-Hill,Boston.

Marmarelis, P. Z. and Marmarelis, V. Z., 1978. Anal-ysis of Physiological Systems, Plenum Press, NewYork.

Marmarelis, V. Z. and Naka, K., 1972. White noiseanalysis of a neuron chain: an application of theWiener theory, Science, 175(5), 1276–1278.

Mathews, V., 1991. Adaptive polynomial filters,IEEE Signal Processing Mag., 8, 10–26.

Mathews, V. J. and Sicuranza, G. L., 2000. Polyno-mial Signal Processing, John Wiley & Sons.

Nowak, R. and Van Veen, B., 1994a. Efficient meth-ods for identification of Volterra filter models,Signal Processing, 38(3), 417–428.

Nowak, R. and Van Veen, B., 1994b. Random andpseudorandom inputs for Volterra filter identifi-cation, IEEE Trans. Signal Processing, 42(8), 2124–2135.

155-6

Ozden, M., Panayırcı, E. and Kayran, A., 1997.Identification of nonlinear magnetic channelswith lattice orthogonalisation, Electronics Letters,33(5), 376–377.

Ozden, M. T., Kayran, A. H. and Panayırcı, E.,1996a. Adaptive Volterra filtering with completelattice orthogonalization, IEEE Trans. Signal Pro-cessing, 44(4), 2092–2098.

Ozden, M. T., Kayran, A. H. and Panayırcı, E.,1996b. Satellite channel identification with latticeorthogonalisation, Electronics Letters, 32(4), 302–304.

Ozden, M. T., Kayran, A. H. and Panayırcı,E., 1998. Adaptive Volterra channel equaliza-tion with lattice ortogonalisation, IEE Proceedings–Communications, 145(2), 109–115.

Palm, G. and Poggio, T., 1985. The Volterra representation and the Wiener expansion: Validity and pitfalls, SIAM J. Appl. Math., 33, 195–216.

Parker, S. and Perry, F., 1981. A discrete ARMA model for nonlinear system identification, IEEE Trans. Circuits and Systems, CAS-28, 224–233.

Petrochilos, N. and Comon, P., 2000. Nonlinear channel identification and performance analysis, IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1, Istanbul, Turkey, pp. 209–212.

Ramponi, G., 1990. Bi-impulse response design of isotropic quadratic filters, Proc. IEEE, 78(4), 665–677.

Raz, G. and Van Veen, B., 1998. Baseband Volterra filters for implementing carrier based nonlinearities, IEEE Trans. Signal Processing, 46(1), 103–114.

Raz, G. M., 1998. Nonlinear system representation, identification and implementation, Ph.D. thesis, University of Wisconsin, Madison, USA.

Reed, M. and Hawksford, M., 1996. Identification of discrete Volterra series using maximum length sequences, Proc. IEE, Circuits, Devices and Systems, 143(5), 241–248.

Rugh, W. J., 1981. Nonlinear System Theory: The Volterra/Wiener Approach, Johns Hopkins University Press, Baltimore.

Saleh, A. A. M., 1981. Frequency-independent and frequency-dependent nonlinear models of TWT amplifiers, IEEE Trans. Communications, 29(11), 1715–1720.

Sandberg, I. W., 1983. The mathematical foundations of associated expansions for mildly nonlinear systems, IEEE Trans. Circuits Systems, 30, 441–445.

Sandberg, I. W., 1992. Uniform approximation with doubly finite Volterra series, IEEE Trans. Signal Processing, 40, 1438–1442.

Schetzen, M., 1965. Measurement of the kernels of a non-linear system of finite order, Int. J. Control, 1(3), 251–263.

Schetzen, M., 1974. A theory of non-linear system identification, Int. J. Control, 20(4), 577–592.

Schetzen, M., 1989. The Volterra and Wiener Theories of Nonlinear Systems, Wiley, New York, 1980. Reprint edition with additional material published by Robert E. Krieger Co., Malabar, Florida.

Shi, Y. and Hecox, K. E., 1991. Nonlinear system identification by m-pulse sequences: Application to brainstem auditory evoked responses, IEEE Trans. Biomedical Engineering, 38(9), 834–845.

Strang, G., 1998. Introduction to Linear Algebra, Wellesley-Cambridge Press.

Syed, M. and Mathews, V., 1993. QR-decomposition based algorithms for adaptive Volterra filtering, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 40(6), 372–382.

Syed, M. and Mathews, V., 1994. Lattice algorithms for recursive least squares adaptive second-order Volterra filtering, IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 41(3), 202–214.

Tick, L. J., 1961. The estimation of transfer functions of quadratic systems, Technometrics, 3(4), 563–567.

Tseng, C. and Powers, E. J., 1993. Nonlinear channel equalization in digital satellite systems, Proc. GLOBECOM, volume 3, Houston, TX, USA, pp. 1639–1643.

Tseng, C.-H., 1993. Advanced nonlinear system identification techniques and their applications to engineering problems, Ph.D. thesis, University of Texas, Austin, USA.

Volterra, V., 1887. Sopra le funzioni che dipendono da altre funzioni, Rend. Regia Accademia dei Lincei, 97–105.

Volterra, V., 1959. Theory of Functions and of Integral and Integro-Differential Equations, Dover Publications, New York.

Wiener, N., 1958. Nonlinear Problems in Random Theory, M.I.T. Press.

Zhou, G. and Giannakis, G., 1997. Nonlinear channel identification and performance analysis with PSK inputs, Proc. 1st IEEE Sig. Proc. Workshop on Wireless Comm., Paris, France, pp. 337–340.
