Charla Colombia


Transcript of Charla Colombia


ADAPTIVE BLIND SIGNAL PROCESSING:

Blind Source Separation

Guillermo Bedoya

UNIVERSITAT POLITÈCNICA DE CATALUNYA (UPC) · INP Grenoble

Grup d'Arquitectures Hardware Avançades [AHA]

Laboratoire des Images et des Signaux [LIS]


OUTLINE

1. Introduction to Blind Signal Processing: Problems and applications

2. Blind Source Separation (BSS): Statistical principles

3. Application of Information Theory to BSS

Coffee Break

4. Adaptive Learning Algorithms for BSS

5. BSS of Nonlinear Mixing Models

6. Hardware Considerations


1. INTRODUCTION TO BLIND SIGNAL PROCESSING


INTRODUCTION

Objective:

To discuss the basic principles of Blind Source Separation in the context of Blind Signal Processing.


    DEFINITION


For many authors, the so-called blind signal processing techniques include:

    BLIND SOURCE SEPARATION (BSS)

    Blind Signal Extraction (BSE)

    Blind Source Deconvolution (BSD)


    DEFINITION

Blind Source Separation (BSS) and Independent Component Analysis (ICA) are emerging techniques of array processing and data analysis that aim to recover unobserved signals or sources from observed mixtures (typically, the output of an array of sensors), exploiting only the assumption of mutual independence between the signals.



    DEFINITION II

[Diagram: speakers emit sources s1, s2, s3 into the environment; a microphone array observes the mixtures x1, x2, x3; the BSS/ICA algorithm adapts B so that y = B x recovers s. Mixing: x = A s; separation: y = B x.]


APPLICATIONS

Processing of Communication Signals:

Radiating source estimation: applications to airport surveillance.

Blind beamforming.

Blind separation of multiple co-channel BPSK signals arriving at antenna arrays.

Biomedical Signal Processing:

ECG & EEG.

Monitoring:

Multitag contactless identification systems. Power plant monitoring.

Environmental and biomedical chemical detection and identification.

Alternative to Principal Component Analysis.


    PRINCIPLES

The simplest BSS model: noiseless, linear, instantaneous mixing.

We assume the existence of j independent signals s1(t), ..., sj(t) and the same number of observations (or observed mixtures) x1(t), ..., xn(t), where number of sensors = number of sources (i = j), expressed as:

xi[t] = ai1 s1[t] + ... + aij sj[t],  for each i = 1, ..., n

or, compactly:

x[t] = A s[t]

where A is the mixing matrix (one row per sensor of the array).
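As a concrete illustration of this mixing model, a minimal numerical sketch (the source distributions and mixing coefficients are my own illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000                                  # number of time samples

# Two independent, super-Gaussian sources s1(t), s2(t)
s = np.vstack([rng.laplace(0.0, 1.0, T),    # Laplacian source
               rng.standard_t(3, T)])       # heavy-tailed Student-t source

A = np.array([[1.0, 0.6],                   # mixing matrix (unknown in practice)
              [0.4, 1.0]])

x = A @ s                                   # observations: x[t] = A s[t]
```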


    PRINCIPLES II

[Diagram: unobserved signals s1, ..., sn → mixing matrix A → observations → separating matrix B → estimated source signals y1, ..., yn.]

x[t] = (x1[t], ..., xn[t])ᵀ = A s[t]

y[t] = (y1[t], ..., yn[t])ᵀ = B x[t]

Ideally B = A⁻¹, so that y = B A s. In general, the global system C = B A can only be identified up to C = P D, where P is a permutation matrix and D a diagonal scaling matrix.


    PRINCIPLES III

The BSS problem consists in recovering the source vector s(t) using only:

The observed data x(t).

The assumption of mutual independence between the entries of the input vector s(t).

Some prior information about the probability distribution of the inputs.


APPLICATIONS (ECG)

[Figure: ECG recordings shown as an example of BSS applied to biomedical signals.]


APPLICATIONS (CDMA)

CDMA system with N users:

The received signal x is the superposition of the spread signals of the N users plus additive Gaussian noise.

[Diagram: bits b1, b2, ..., bN are spread by code1, code2, ..., codeN (L chips each), weighted by powers P1, P2, ..., PN and summed together with noise.]

x(L×1) = H(L×N) b(N×1) + n(L×1)

BSS makes it possible to recover b from x without knowing H, exploiting only statistical independence.


APPLICATIONS (Image processing)

[Diagram: an m×n image I is split into k×k blocks, each block vectorized as a column xt of the data matrix X; ICA yields the basis B, the components are processed, and the image is recomposed.]


    APPLICATIONS (Image processing)



2. BLIND SOURCE SEPARATION: STATISTICAL PRINCIPLES


    STATISTICAL MODEL

BSS exploits:

SPATIAL DIVERSITY: BSS looks for structure across the sensors, not across time.

SAMPLE DISTRIBUTION.

Statistical model:

MIXING MATRIX: its columns are assumed to be linearly independent (so that it is invertible).

SOURCE DISTRIBUTION: the distribution of each source is a nuisance parameter, i.e., we are not primarily interested in it.


    LINEAR MIXING

If each source is assumed to have a pdf denoted qi(.), the joint pdf q(s) of the source vector s is:

q(s) = q1(s1) × ... × qn(sn) = ∏ i=1..n qi(si)


    LINEAR MIXING

General approach

[Diagram: sources s1, s2 with joint pdf p(s1, s2) and marginal pdfs p1(s1), p2(s2) are mixed as x = A s; separation then proceeds in two steps.]

1. Whitening: z = W x, with W chosen so that E[z zᵀ] = I.

2. Rotation: y = U z.
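A minimal sketch of the whitening step, using an eigendecomposition of the sample covariance to build one valid W (the slides do not fix a particular choice of W):

```python
import numpy as np

def whiten(x):
    """Whiten observations x (sensors x samples) so that E[z z^T] = I."""
    x = x - x.mean(axis=1, keepdims=True)      # remove each sensor's mean
    d, E = np.linalg.eigh(np.cov(x))           # eigendecomposition of the covariance
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T    # W = C^(-1/2)
    return W @ x, W

z, W = whiten(x)                               # x: mixtures from the sketch above
```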


    CONSTRAINTS

[Figure: joint PDFs under different mixing matrices for two signals with uniform distribution, showing the effect of whitening and rotation.]


    CONSTRAINTS

[Figure: joint PDF of different mixing matrices for two signals with Gaussian distribution. The Gaussian joint PDF is rotation-invariant, which is why Gaussian sources cannot be separated this way.]


    INDEPENDENCE & DECORRELATION

Intuitive: the variables y1 and y2 are independent if the value of y1 provides no information about the value of y2.

Mathematical (probability density functions, pdf): let p(y1, y2) be the joint pdf of y1 and y2, and let p1(y1) be the marginal pdf of y1,

p1(y1) = ∫ p(y1, y2) dy2

and similarly for y2. Then y1 and y2 are independent if and only if:

p(y1, y2) = p1(y1) p2(y2).


    INDEPENDENCE & DECORRELATION

Concept of independence

Mathematically: if y1 and y2 are independent, then

E[ g(y1) h(y2) ] = E[ g(y1) ] E[ h(y2) ]

Uncorrelatedness is the special case in which g(·) and h(·) are the identity:

E[ y1 y2 ] − E[ y1 ] E[ y2 ] = 0

Uncorrelatedness does not imply independence.
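A small numerical illustration of that last point (my own example, not from the slides): y2 = y1² is uncorrelated with a symmetric y1, yet completely dependent on it:

```python
import numpy as np

rng = np.random.default_rng(1)
y1 = rng.uniform(-1, 1, 100_000)   # symmetric, zero-mean variable
y2 = y1 ** 2                       # deterministic function of y1: fully dependent

# Correlation is (numerically) zero: E[y1 y2] - E[y1]E[y2] ~ 0
print(np.mean(y1 * y2) - np.mean(y1) * np.mean(y2))

# But the independence condition E[g(y1)h(y2)] = E[g(y1)]E[h(y2)] fails, e.g. g = h = square:
print(np.mean(y1**2 * y2**2) - np.mean(y1**2) * np.mean(y2**2))   # ~0.076, not zero
```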


    INDEPENDENCE

Statistical measurements of independence and Gaussianity:

Kullback-Leibler divergence (signal distribution):

K(f | g) = ∫ f(y) log [ f(y) / g(y) ] dy

Kurtosis, or fourth-order cumulant (signal Gaussianity):

kurt(y) = E[y⁴] − 3 (E[y²])²;  for unit-variance y, kurt(y) = E[y⁴] − 3

Negentropy

Mutual Information (I) & Entropy (H)


    OBJECTIVE (CONTRAST) FUNCTIONS

BSS ALGORITHM: Contrast Function + Optimization Method

The contrast function is a measurement of independence; the optimization method minimizes or maximizes that measurement.

SOME CONTRAST FUNCTIONS:

Maximum Likelihood

Mutual Information or Entropy Maximization

Orthogonal Contrast

INFOMAX

Higher-Order approximations

Nonlinear cross-correlations


    OBJECTIVE (CONTRAST) FUNCTIONS

Properties of the BSS/ICA method depend on both elements (contrast function and optimization method). In particular:

The statistical properties (e.g., consistency, asymptotic variance, robustness) of the ICA method depend on the choice of the objective function.

The algorithmic properties (e.g., convergence speed, memory requirements, numerical stability) depend on the optimization algorithm.


    ENTROPY MAXIMIZATION CONTRAST

MUTUAL INFORMATION OR ENTROPY MAXIMIZATION

φME(y) = K( p(y) | ∏i pi(yi) )

i.e., the Kullback-Leibler divergence between the joint pdf of the output vector y and the product of its marginals.

Considering the whitening constraint, the entropy of y, H(y), is invariant under rotations (H(y) = cte.), so:

φME = Σ i=1..N H(yi) − H(y) = Σ i=1..N H(yi) + cte.

Separating algorithm: min over B of φME(B x).


    MAXIMUM LIKELIHOOD CONTRAST

Maximum Likelihood:

φML(y) = K( p(y) | q(s) ) = ∫ p(y) log { p(y) / [ q1(y1) × ... × qN(yN) ] } dy

where the pdf q of s = A⁻¹x is assumed known.

The Maximum Likelihood contrast maximizes the independence of the output components and minimizes the DISTANCE to the PDF of the data.


    KURTOSIS

Non-Gaussianity measurement (Central Limit Theorem)

Kurtosis (fourth-order cumulant):

kurt(y) = E[y⁴] − 3 (E[y²])²

If y has unit variance:

kurt(y) = E[y⁴] − 3

kurtosis = 0 for Gaussian random variables.
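A short empirical check of these formulas (an illustrative sketch; it also previews the sub/super-Gaussian cases on the next slide):

```python
import numpy as np

def kurt(y):
    """Empirical kurtosis: E[y^4] - 3 (E[y^2])^2 (zero for Gaussian y)."""
    y = y - y.mean()
    return np.mean(y**4) - 3 * np.mean(y**2) ** 2

rng = np.random.default_rng(2)
print(kurt(rng.normal(size=100_000)))     # ~0  (Gaussian)
print(kurt(rng.laplace(size=100_000)))    # >0  (super-Gaussian, leptokurtic)
print(kurt(rng.uniform(-1, 1, 100_000)))  # <0  (sub-Gaussian, platykurtic)
```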


    KURTOSIS

Laplacian PDF: positive kurtosis, leptokurtic, super-Gaussian.

p(y) = (1/√2) e^(−√2 |y|)

Uniform PDF: negative kurtosis, platykurtic, sub-Gaussian.


3. APPLICATION OF INFORMATION THEORY TO BSS


    INTRODUCTION


    Objective:

    To discuss the INFOMAX criterion and apply it to the

    problem of BSS


    INTRODUCTION TO INFORMATION THEORY

The binary entropy function

The entropy of a random variable X is:

H(X) = −Σ i=1..N p(xi) log p(xi)

Suppose X is a binary random variable:

X = 1 with probability p, X = 0 with probability 1 − p.

Then the entropy of X is:

H(X) = −p log p − (1 − p) log(1 − p)

Since it depends on p, this is also sometimes written H(p).
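A minimal sketch of the binary entropy function (base-2 logarithm assumed here, so the result is in bits):

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p log2(p) - (1 - p) log2(1 - p), in bits."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)  # avoid log(0)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

print(binary_entropy(0.5))    # 1.0: a fair coin carries one full bit
print(binary_entropy(0.99))   # ~0.08: a nearly deterministic bit carries little information
```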


    RELATIVE ENTROPY AND MUTUAL INFORMATION


    H(.) and I(.) applied to BSS I

[Diagram: sources si(t) → mixing matrix A → observations x(t) → separating matrix B → outputs y(t).]

Let si(t), i = 1, 2, ..., n, be a set of statistically independent signals, with

x(t) = A s(t)

We want to determine the matrix B so that

y(t) = B x(t) = B A s(t)

recovers s(t) as fully as possible. As the criterion we take the joint entropy of the output, H(y).


    H(.) and I(.) applied to BSS II

H(y) = Σ i=1..N H(yi) − I(y1, ..., yN)

If we maximize H(y), we should:

1. Maximize each H(yi).

2. Minimize I(y1, ..., yN).

The H(yi) are maximized when (and if) the outputs are uniformly distributed.

The mutual information is minimized when the outputs are all independent!


H(.) and I(.) applied to BSS III

How do we work with the mutual information of the outputs?

We consider the case of adapting a processing function g which operates on the scalar X through Y = g(X), in order to maximize or minimize the mutual information between X and Y.

But achieving the MI minimization, and consequently independence, requires that g have the form of the Cumulative Distribution Function (CDF) of si.

[Diagram: x(t) → B → g → y(t).]


H(.) and I(.) applied to BSS IV

We may not know the pdf of X; however, we can assume a particular functional form. E.g., assuming that p(si) is super-Gaussian, we take the logistic sigmoid:

g(X) = g(X, b) = 1 / (1 + e^(−bX))


    H(.) and I(.) applied to BSS V

Based on the last assumption (p(si) super-Gaussian), we can write:

H(yi) = −E[ log p(yi) ]

where, with ui = b xi,

p(yi) = p(ui) / |∂yi/∂ui|

so that

H(yi) = −E[ log ( p(ui) / |∂yi/∂ui| ) ]

and

H(y) = Σ i=1..N H(yi) − I(y) = −Σ i=1..N E[ log ( p(ui) / |∂yi/∂ui| ) ] − I(y)


H(.) and I(.) applied to BSS VI

We want to determine B to maximize the joint entropy of the output, H(y). An adaptive scheme is to take

Δb ∝ ∂H/∂b = ∂/∂b [ ln |∂y/∂x| ] = (∂y/∂x)⁻¹ ∂(∂y/∂x)/∂b

In our specific case we have:

y = g(x) = 1 / (1 + e^(−bx))

∂y/∂x = b y (1 − y)

∂(∂y/∂x)/∂b = y (1 − y) [ 1 + b x (1 − 2y) ]


From

Δb = (∂y/∂x)⁻¹ ∂(∂y/∂x)/∂b = [ b y(1 − y) ]⁻¹ y(1 − y) [ 1 + b x(1 − 2y) ]

we get

Δb = b⁻¹ + (1 − 2y) x

where (1 − 2y) plays the role of the score function. The weight update rule can then be

b[k+1] = b[k] + μb Δb
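A compact sketch of this scalar adaptation rule (the input distribution, initial weight and step size are illustrative assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.laplace(size=50_000)     # super-Gaussian scalar input, as assumed above
b, mu = 0.1, 0.01                # initial weight and step size

for xi in x:
    y = 1.0 / (1.0 + np.exp(-b * xi))     # y = g(b x), logistic sigmoid
    db = 1.0 / b + xi * (1.0 - 2.0 * y)   # delta b = 1/b + (1 - 2y) x
    b += mu * db                          # b[k+1] = b[k] + mu_b * delta b

print(b)   # b settles at a scale matched to the input distribution
```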


General form

From the term

H(y) = −Σ i=1..N E[ log ( p(ui) / |∂yi/∂ui| ) ] − I(y)

we obtain the gradient

∂H(y)/∂B = B⁻ᵀ − φ(u) xᵀ

where the score function is

φ(u) = −p′(u) / p(u)


    METHOD OVERVIEW

[Diagram: s → A → x → B → y.]

1. Initialize the separating matrix B (randomly).

2. Generate the outputs: y = B x.

3. Measure the independence of the outputs (contrast function).

For count = 1 to number of iterations (nit): update the matrix B (using ∂H(y)/∂B) and repeat until nit, according to:

B[k+1] = B[k] + μB ∂H(y)/∂B

y = B[k+1] x

4. End of algorithm.
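Putting the steps above together, a minimal batch sketch of this loop (tanh is my assumed score function, suitable for super-Gaussian sources; B is initialized to the identity rather than randomly, for reproducibility):

```python
import numpy as np

def infomax_bss(x, n_iter=200, mu=0.1):
    """Batch sketch of the loop: B[k+1] = B[k] + mu * (B^(-T) - phi(y) x^T / T)."""
    n, T = x.shape
    B = np.eye(n)                                       # 1. initialization
    for _ in range(n_iter):
        y = B @ x                                       # 2. outputs y = B x
        phi = np.tanh(y)                                # assumed score function
        B += mu * (np.linalg.inv(B).T - phi @ x.T / T)  # 3. ascend dH(y)/dB
    return B

B = infomax_bss(z)   # z: whitened mixtures from the earlier sketch
y = B @ z            # estimated sources, up to permutation and scaling
```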


4. ADAPTIVE LEARNING ALGORITHMS FOR BSS


    ADAPTIVE LEARNING

BSS framework:

[Diagram: s → A → x → B → y.]

In Euclidean spaces, the (stochastic) gradient-based learning rule reaches a solution:

Bt+1 = Bt − μt ∇φ(Bt)

The algorithm is stable when the contrast is at a minimum. Problem: the space of invertible matrices B is not Euclidean.


    GRADIENT TECHNIQUES

Conventional gradient (Euclidean space): the infinitesimal transformation of B is expressed as

B → B + δB

Relative gradient: the infinitesimal transformation of B is expressed as

B → (I + ε)B = B + εB

[Figure: both transformations viewed in the space of separating matrices.]


    RELATIVE GRADIENT

Conventional gradient (Euclidean space):

φ(B + δB) = φ(B) + ⟨∇φ(B) | δB⟩ + o(δB)

where ⟨A | B⟩ = Trace(A Bᵀ) = Σ i,j=1..N Aij Bij and (∇φ(B))ij = ∂φ/∂Bij.

Relative gradient:

φ((I + ε)B) = φ(B) + ⟨∇φ(B) Bᵀ | ε⟩ + o(ε)

so the relative gradient is ∇̃φ(B) = ∇φ(B) Bᵀ.


    RELATIVE GRADIENT

Comparison:

∇̃φ(B) = ∇φ(B) Bᵀ

[Figure: conventional vs. relative gradient directions in the space of separating matrices.]


    NATURAL GRADIENT

Natural gradient:

Approach: find the dB that minimizes the first-order expansion φ(B + dB) ≈ φ(B) + ⟨∇φ(B) | dB⟩, taking into account a step of fixed length, ‖dB‖² = ε², measured in the metric of the space of invertible matrices. The resulting direction is the natural gradient:

∇̂φ(B) = ∇φ(B) Bᵀ B

[Figure: conventional, relative and natural gradient directions in the space of separating matrices.]


    LEARNING ALGORITHMS

Algorithm based on the conventional gradient:

Bt+1 = Bt − μt ∇φ(Bt)

When using ML:

φML(y) = K( p(y) | q(s) ) = ∫ p(y) log { p(y) / [ q1(y1) × ... × qN(yN) ] } dy

Taking into account that

H(y) = −∫ p(y) log p(y) dy = cte. + log det(B),  with ∂ log det(B) / ∂B = B⁻ᵀ

and that

∂/∂Bij [ Σk log qk(yk) ] = ( qi′(yi) / qi(yi) ) xj

we therefore obtain

Bt+1 = Bt + μt [ I − φ(y) yᵀ ] Bt⁻ᵀ


    LEARNING ALGORITHMS

Algorithm based on the natural gradient:

Bt+1 = Bt − μt ∇̂φ(Bt)

When using φML this gives:

Bt+1 = Bt + μt [ I − φ(y) yᵀ ] Bt

where φi(yi) = −qi′(yi) / qi(yi), i = 1, ..., N, is the "score function".

Algorithm based on the relative gradient: the update takes the same form,

Bt+1 = Bt + μt [ I − φ(y) yᵀ ] Bt

[Figure: gradient directions in the space of separating matrices.]
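A minimal sketch of this natural-gradient update (again with tanh as my assumed score function for super-Gaussian sources):

```python
import numpy as np

def natural_gradient_bss(x, n_iter=500, mu=0.05):
    """Natural-gradient sketch: B[t+1] = B[t] + mu * (I - phi(y) y^T) B[t]."""
    n, T = x.shape
    B = np.eye(n)
    for _ in range(n_iter):
        y = B @ x
        phi = np.tanh(y)                           # assumed score function
        B += mu * (np.eye(n) - phi @ y.T / T) @ B  # equivariant update, no inverse
    return B
```

Unlike the conventional-gradient rule, no matrix inversion of B is required at each step, which is one reason this form is attractive for the hardware implementations discussed in section 6.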


5. BSS OF NON-LINEAR MIXING MODELS


    THE POST NON-LINEAR MODEL

Given a set of sources s linearly mixed by means of the matrix A, the linear mixing part is described by:

x = A s

The invertible non-linear transfer function f(.) is represented by:

e = f(x)

where e represents the observations at the output of the sensor system. The sources s are recovered if the non-linear functions gi(ei) and the de-mixing matrix B are the inverses of fi(xi) and A, respectively.

[Diagram: s1, s2 → A → x1, x2 → f1, f2 → e1, e2 → g1, g2 → y1, y2 → B → u1, u2.]


    THE POST NON-LINEAR MODEL

Consequently, the signals are related by the equation:

y = g(e) = g( f(A s) )

and the output is described by:

u = B y = B g( f(A s) )

In order to separate the observations into statistically independent components, we use a measure of the degree of dependence of the components of u. When this measure reaches its absolute minimum, the independence of the components is ensured.


    THE POST NON-LINEAR MODEL

The mutual information I(.) can be employed as the independence measure for the components of u. The mutual information (MI) of the output can be expressed via the signal entropy H:

I(u) = Σi H(ui) − H(u)

where

H(u) = −∫ p(u) log p(u) du

The MI is non-negative, and zero when the components of u are statistically independent of one another.

The MI has the property that, if we perform invertible transformations on the individual components of u, giving zi = ψi(ui), the mutual information of the components zi equals the mutual information of the components ui.


    REALISTIC EXAMPLE

[Diagram: linear mixing stage (A) → nonlinear distortion (f1, f2) → nonlinear compensation (g1, g2) → linear de-mixing stage (B); ion activities ai, aj enter sensors ID1, ID2, whose outputs are compensated by gID1, gID2.]

ID = a + b·ln( ai + Kij·aj^(zi/zj) )

gID = e^((ID − a)/b)
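A direct transcription of these two sensor equations (function and parameter names are mine):

```python
import numpy as np

def sensor_ID(ai, aj, a, b, K, z_ratio):
    """Sensor response from the slide: ID = a + b ln(ai + K * aj^(zi/zj))."""
    return a + b * np.log(ai + K * aj ** z_ratio)

def comp_gID(v, a, b):
    """Compensation from the slide: gID = exp((ID - a)/b)."""
    return np.exp((v - a) / b)

# Round trip: the compensation inverts the log nonlinearity, leaving the
# linear-in-activities mixture ai + K*aj^(zi/zj) for the linear BSS stage.
print(comp_gID(sensor_ID(0.3, 0.1, a=0.2, b=0.059, K=0.5, z_ratio=1.0),
               a=0.2, b=0.059))   # ~0.35 = 0.3 + 0.5*0.1
```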


ALGORITHM

1. Initialization: B = I, gID = ID, a and b.

2. Loop:

1. Compute the outputs: y = B x.

2. Estimate the parameters:

a[k+1] = a[k] + stepsize·Δa

b[k+1] = b[k] + stepsize·Δb

3. Normalization.

4. Linear BSS algorithm: B[k+1] = B[k] + stepsize·ΔB.

5. Repeat until convergence.


6. HARDWARE CONSIDERATIONS


    OVERVIEW

HARDWARE IMPLEMENTATION:

DSP-based implementation

FPGA

Fully analog system integration

HYBRID INTEGRATION:

ASIC with a DSP core associated with the sensor array.


    DSP IMPLEMENTATION

Higher flexibility at low system cost can be obtained by using a DSP for fast software implementation of the algorithm.

Finite-precision errors:

Fixed point vs. floating point

Number of bits

Analog-to-digital conversion

Finite word length used to store all internal algorithmic quantities


    DSP ARCHITECTURE



    ANALOG CMOS CIRCUIT IMPLEMENTATION

The INFOMAX algorithm finds the un-mixing matrix B by maximizing the joint entropy H(y) of the outputs. Its learning rule (using the natural gradient) is:

ΔB ∝ (∂H(y)/∂B) Bᵀ B = [ I − φ(u) uᵀ ] B


    MULTIPLIER CIRCUIT

The rule can be written as:

u = B x;  SUM = uᵀ B

ΔB = [ B − φ(u)·SUM ]


    WEIGHT UPDATE CIRCUIT


Vout = outp − outm = [ (Vc1 − Vc2) / (sC) ] (vip − vim)


WORK PROPOSAL!


PROPOSAL

HARDWARE IMPLEMENTATION OF A BLIND SOURCE SEPARATION ALGORITHM APPLIED TO SPEECH PROCESSING.

BASIC REQUIREMENTS:

2 students (good level of English).

Knowledge of MATLAB, C/C++ and assembler.

Knowledge of multivariate statistics.

DSP development board and platform.

DURATION AND WORKING ARRANGEMENT:

5 months. The algorithm is already designed and ready!

Communication via INTERNET (bibliography gathering, algorithm work, etc.), supervised by a CUTB lecturer.


HARDWARE IMPLEMENTATION OF A BLIND SOURCE SEPARATION ALGORITHM APPLIED TO SPEECH PROCESSING.

GENERAL WORK PLAN:

Bibliography gathering and contextualization (~20 days).

MATLAB simulation of the algorithm: separate two or more speech signals (~10 days).

C/C++ simulation of the algorithm: separate two or more speech signals (~30 days).

Development of the algorithm on the DSP board (90 days):

Selection and acquisition of the development board.

Definition of the design parameters.

Progress reports every 20 days (each one will become a chapter of the final-year project).


THANK YOU!


ADAPTIVE BLIND SIGNAL PROCESSING:

Blind Source Separation

Guillermo Bedoya

UNIVERSITAT POLITÈCNICA DE CATALUNYA (UPC) · INP Grenoble

Grup d'Arquitectures Hardware Avançades [AHA]

Laboratoire des Images et des Signaux [LIS]