Transcript of "Contributions to audio source separation and content description" (HDR defense slides: videos.rennes.inria.fr/hdr/emmanuel-vincent/HDRdefense.pdf)

Contributions to audio source separation

and content description

E. Vincent

METISS Team, Inria Rennes - Bretagne Atlantique

E. Vincent () HDR 23/11/2012 1 / 31

Page 2: Contributions to audio source separation and content ...videos.rennes.inria.fr/hdr/emmanuel-vincent/HDRdefense.pdf · Contributions to audio source separation and content description

Introduction

Career path



Introduction

Audio in the real world

The audio modality is essential in daily situations: spoken communication, TV, music, entertainment...

But audio scenes are often more complex than we would like!

Ex: TV series

Many sound sources: Speech, music, background noise.

Much information: Who is speaking? What is he saying?

Where is he? How stressed is he?

What’s the music style? The bombing rate?

What is happening? What’s gonna happen next?



Introduction

General goal and stakes

We want to:

enhance the sound sources of interest: source separation

extract the corresponding information: content description

Wide range of applications, including:

high-fidelity hearing aids and mobile communications,

voice applications, multimedia document indexing, music search,

3D audio rendering, repurposing, interactive applications. . .


Introduction

Part 1. Audio source separation

Part 2. Audio content description

Part 3. Research directions


Audio source separation

Audio source separation: the basics

Additive mixing:

    x(t) = Σ_{j=1}^J c_j(t)

x(t): multichannel mixture; c_j(t): jth spatial source image.

(Not so) special case: point sources:

    c_j(t) = a_j ⋆ s_j(t)

a_j(τ): mixing filter; s_j(t): jth source signal.

Goal: estimate c_j(t) given x(t).
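The convolutive mixing model above can be simulated directly. A minimal sketch (all sizes, signals, and filters below are illustrative toy values, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: J = 2 point sources, I = 2 channels, mixing filters
# a_j of length 64 taps (all sizes and signals are illustrative).
J, I, T, L = 2, 2, 8000, 64
s = rng.standard_normal((J, T))          # source signals s_j(t)
a = rng.standard_normal((J, I, L)) / L   # mixing filters a_j(tau)

# Spatial image of source j: c_j(t) = (a_j * s_j)(t), channel by channel
c = np.stack([[np.convolve(s[j], a[j, i])[:T] for i in range(I)]
              for j in range(J)])        # shape (J, I, T)

# Additive mixing: x(t) = sum over j of c_j(t)
x = c.sum(axis=0)                        # shape (I, T)
```

The separation problem is then to undo the last line: recover each c_j(t) given only x(t).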


Audio source separation

Evolution of the research focus


Audio source separation

Spatial and spectral cues (1)

Standard principle:

work in the time-frequency domain:

    x(n, f) = Σ_{j=1}^J c_j(n, f)

x(n, f): vector of mixture TF coefficients; c_j(n, f): jth source spatial image TF coefficients.

for point sources, replace convolution by narrowband multiplication:

    c_j(n, f) = a_j(f) s_j(n, f)

a_j(f): mixing coefficients; s_j(n, f): jth source TF coefficients.

...
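The narrowband approximation can be checked numerically: when the mixing filter is much shorter than the analysis window, the STFT of the convolved signal is close to a per-frequency multiplication. An illustrative sketch (`scipy.signal.stft` and the toy filter/window lengths are my choices, not from the slides):

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(1)
T, L, nper = 16384, 8, 1024              # toy sizes: filter << window
s = rng.standard_normal(T)
a = rng.standard_normal(L) / L           # short mixing filter a(tau)
c = np.convolve(s, a)[:T]                # exact convolution c(t) = (a * s)(t)

# Narrowband approximation: c(n, f) ~ a(f) s(n, f), valid when the
# filter is much shorter than the analysis window.
f, n, S = stft(s, nperseg=nper)
_, _, C = stft(c, nperseg=nper)
af = np.fft.rfft(a, nper)                # a(f): filter frequency response
err = np.linalg.norm(C - af[:, None] * S) / np.linalg.norm(C)
# err is small here: an 8-tap filter against a 1024-sample window
```

With long reverberant filters this approximation degrades, which motivates the wideband and full-rank models discussed later in the talk.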


Audio source separation

Spatial and spectral cues (2)

Standard principle:

. . .

estimate a_j(f) and s_j(n, f) by time-frequency clustering of spatial cues [Zibulevsky, Rickard, Gribonval...]

[Figure: spectrograms (f in kHz vs. n in s, dB scale) of the left source s1(n,f), center source s2(n,f), right source s3(n,f), left mixture x1(n,f), right mixture x2(n,f), and the incoming sound direction (degrees)]

exploit additional spectral cues to separate overlapping sources or sources from the same direction [Benaroya, Virtanen, Vincent...]
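A minimal sketch of such clustering of spatial cues, in the spirit of DUET-style methods (synthetic anechoic two-channel data, amplitude cues only; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
F, N, J = 128, 50, 2

# Synthetic anechoic scene: each source is TF-sparse and reaches channel 2
# with a distinct gain, which serves as its spatial cue (values invented).
gains = np.array([0.3, 3.0])                      # channel-2 gains
S = rng.standard_normal((J, F, N)) * (rng.random((J, F, N)) < 0.1)
x1 = S.sum(axis=0)                                # channel 1 mixture
x2 = (gains[:, None, None] * S).sum(axis=0)       # channel 2 mixture

# Cue per active TF bin: interchannel amplitude ratio; cluster into J
# groups with a tiny 1-D k-means (robust median update).
active = np.abs(x1) > 1e-8
ratio = np.abs(x2[active]) / np.abs(x1[active])
centers = np.percentile(ratio, [10, 90])
for _ in range(20):
    labels = np.abs(ratio[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([np.median(ratio[labels == j]) for j in range(J)])
# The cluster centers recover the two spatial gains; a binary mask per
# cluster would then separate the sources.
```

When sources overlap in time-frequency or share a direction, this cue alone is ambiguous, which is exactly where the spectral cues mentioned above come in.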


Audio source separation

Contributions and positioning

TF representation: STFT; adaptive [Vincent et al., 2005]; auditory-motivated [PhD Duong]

Spatial model: instantaneous; anechoic; convolutive; wideband [Kowalski et al., 2010]; full-rank [PhD Duong]

Spectral model: binary; ℓ1; ℓp [Vincent et al., 2007]; Gaussian; GMM; NMF; multilevel NMF [postdoc Ozerov]

Estimation: two-stage; joint ML; convex penalties [PhD Ito, Benichoux]; MAP [PhD Duong]; consistent Wiener [NTT patent]; fast EM [postdoc J. Thiemann]; online EM [postdoc L. Simon]

...


Audio source separation

The rank-1 spatial model

Former state-of-the-art: narrowband approximation

    c_j(n, f) = a_j(f) s_j(n, f)

c_j(n, f): jth source spatial image TF coefficients; a_j(f): Fourier transform of a_j(τ); s_j(n, f): jth source TF coefficients.

In the Gaussian (variance) modeling framework,

    s_j(n, f) ~ N(0, v_j(n, f))  ⇒  c_j(n, f) ~ N(0, v_j(n, f) a_j(f) a_j(f)^H)

This rank-1 model essentially represents the apparent spatial direction of sound at frequency f.

Problem: reverberation induces echoes from all directions. The notion of a mixing filter a_j(τ) does not even make sense for diffuse sources.

E. Vincent () HDR 23/11/2012 11 / 31

Page 21: Contributions to audio source separation and content ...videos.rennes.inria.fr/hdr/emmanuel-vincent/HDRdefense.pdf · Contributions to audio source separation and content description

Audio source separation

Proposed full-rank spatial model

Proposed model [PhD Duong]:

    c_j(n, f) ~ N(0, v_j(n, f) Σ_j(f))

with Σ_j(f) a full-rank spatial covariance matrix.

Represents both the spatial direction and the spatial width of the source.

Derived an expectation-maximization (EM) algorithm for ML estimation.
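Under this Gaussian framework, the spatial images are typically recovered by multichannel Wiener filtering, i.e., the posterior mean c_hat_j = v_j Σ_j (Σ_k v_k Σ_k)^{-1} x. A minimal single-TF-bin sketch with invented parameters (not the actual EM implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
I, J = 2, 3   # channels, sources

# Invented per-source parameters at one TF bin (n, f): scalar variance
# v_j and full-rank spatial covariance Sigma_j (I x I, positive definite).
v = rng.uniform(0.5, 2.0, J)
A = rng.standard_normal((J, I, I))
Sigma = np.einsum('jik,jlk->jil', A, A) + 0.1 * np.eye(I)

# Draw c_j ~ N(0, v_j Sigma_j) and mix: x = sum over j of c_j
c = np.stack([np.linalg.cholesky(v[j] * Sigma[j]) @ rng.standard_normal(I)
              for j in range(J)])
x = c.sum(axis=0)

# Posterior-mean (multichannel Wiener) estimate of each spatial image:
# c_hat_j = v_j Sigma_j (sum over k of v_k Sigma_k)^{-1} x
Sx = np.einsum('j,jik->ik', v, Sigma)
c_hat = np.stack([v[j] * Sigma[j] @ np.linalg.solve(Sx, x) for j in range(J)])
```

Note that the estimates are mixture-consistent by construction: they sum back to x.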

Results on two-channel mixtures of three sources:

[Plot: SDR (dB) vs. reverberation time (50 to 500 ms) for the full-rank model, rank-1, binary masking, and ℓ1 minimization]


Audio source separation

Conventional NMF

Former state-of-the-art: nonnegative matrix factorization (NMF)

    v_j(n, f) = Σ_k w_jk(f) h_jk(n)

Problem: either too rigid (w_jk(f) fixed) or prone to overfitting (w_jk(f) adaptive).
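As an illustrative sketch of the factorization above: multiplicative updates for the generalized KL divergence, after Lee and Seung (audio work often uses the Itakura-Saito divergence instead; all sizes here are toy values):

```python
import numpy as np

rng = np.random.default_rng(3)
F, N, K = 64, 100, 5

# Toy power spectrogram V(f, n) generated from K ground-truth components.
W0, H0 = rng.uniform(0, 1, (F, K)), rng.uniform(0, 1, (K, N))
V = W0 @ H0 + 1e-6

# Multiplicative updates for v(f, n) = sum over k of w_k(f) h_k(n),
# minimizing the generalized KL divergence (Lee & Seung updates).
W, H = rng.uniform(0.1, 1.0, (F, K)), rng.uniform(0.1, 1.0, (K, N))
for _ in range(200):
    H *= W.T @ (V / (W @ H)) / W.sum(axis=0)[:, None]
    W *= (V / (W @ H)) @ H.T / H.sum(axis=1)[None, :]

rel_err = np.linalg.norm(W @ H - V) / np.linalg.norm(V)  # small after fitting
```

The multiplicative form keeps W and H nonnegative throughout, which is what makes the fixed-vs-adaptive tension above so stark: fixing W constrains too much, freeing it lets the factorization absorb anything.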


Audio source separation

Proposed multilevel NMF (1) [Vincent 2007, postdoc Ozerov]


Audio source separation

Proposed multilevel NMF (2)

Can handle new constraints: harmonicity, smooth envelope, attack type...

Derived a flexible EM algorithm for joint estimation of all layers, whether fixed or adaptive ⇒ FASST Toolbox.

Results on two-channel mixtures of three or four sources (SiSEC 2010), average SDR (dB) by spatial, spectral, and temporal constraints:

    rank  spec  temp   5 cm   1 m
    1     -     -      2.2    2.5
    2     -     -      2.0    3.0
    1     X     -      2.2    2.8
    2     X     -      2.3    3.2
    1     -     X      2.4    2.6
    2     -     X      2.1    2.9
    1     X     X      2.5    3.9
    2     X     X      2.3    5.0

Also best general algorithm for the separation of music recordings in SiSEC 2011.


Audio source separation

Evaluation: a transversal activity

Complete evaluation methodology for audio source separation:

formalization of audio source separation tasks [Vincent et al., 2007]

definition of objective/subjective evaluation criteria [postdoc Emiya] ⇒ BSS Eval & PEASS Toolboxes

computation of theoretical performance bounds [Vincent et al., 2007].

Co-founded two series of evaluation campaigns:

SASSEC/SiSEC (source separation): 119 entries since 2007

CHiME (noise-robust speech recognition): 13 in 2011, again in 2013

Impact:

helped the adoption of common problems, datasets and metrics,

helped focus on the remaining challenges: lack of spatial diversity, reverberation, source movements, background noise.


Audio source separation

Are we there yet?

[Audio demo: mix, vocals, drums, bass, piano (separation by FASST)]

This level of quality is sufficient for many signal enhancement/remixing applications.

Ongoing industrial transfer to Canon Inc., Audionamix SA and MAIA SARL.

Is it sufficient for content description?


Audio content description

Part 1. Audio source separation

Part 2. Audio content description

Part 3. Research directions



Audio content description

Audio content description: the basics

Audio content description techniques do not operate on the signals directly but on derived features, e.g., Mel-frequency cepstral coefficients (MFCCs).

Classification/transcription most often relies on probabilistic acoustic models of the features, e.g., Gaussian mixture models (GMMs).

Two stages: training and decoding.

[Diagram: training stage and decoding stage]

Problem: the matched training/test paradigm works only for clean data.

How can we reduce the mismatch for noisy/mixture data?
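A minimal sketch of the train-then-decode pipeline described above, with a single-Gaussian acoustic model per class (a one-component GMM) and synthetic stand-ins for MFCC features (class names and parameters are invented):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
D = 13  # e.g., 13 MFCCs per frame

# Synthetic stand-ins for per-frame training features of two classes.
mu = {'speech': np.zeros(D), 'music': 0.8 * np.ones(D)}
train = {c: rng.multivariate_normal(mu[c], np.eye(D), 500) for c in mu}

# Training: fit mean/covariance per class.
models = {c: (X.mean(axis=0), np.cov(X.T)) for c, X in train.items()}

# Decoding: pick the class with the highest total log-likelihood.
def classify(frames):
    scores = {c: multivariate_normal(m, S).logpdf(frames).sum()
              for c, (m, S) in models.items()}
    return max(scores, key=scores.get)

test_frames = rng.multivariate_normal(mu['music'], np.eye(D), 50)
```

This works because the test frames are drawn from the same distributions as the training data; the mismatch problem above is precisely that noisy or mixed test data breaks this assumption.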


Audio content description

Conventional techniques for noise robustness

Feature compensation: good separation but often increased mismatch.

Training data coverage: better match but a huge training set is needed.

Noise adaptive training [Deng, 2000]: combines both advantages, but a large training set is still needed.



Audio content description

Uncertainty propagation and decoding

Emerging paradigm: estimate and propagate confidence values represented by Gaussian posterior distributions [Deng, Astudillo, Kolossa...].


Σ_s: ML or heuristic; Bayesian [postdoc Adiloglu]

(s, Σ_s) → (y, Σ_y): moment matching, unscented transform...

Decoding: uncertainty decoding, modified imputation...

Training: clean data; uncertainty training [postdoc Ozerov]
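A common form of uncertainty decoding scores the enhanced feature under the acoustic-model Gaussian with the feature posterior covariance added, instead of treating the feature as exact. A toy sketch (all numbers invented):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch: an enhanced feature s_hat comes with posterior covariance
# Sigma_s; uncertainty decoding scores it under N(mu, Sigma + Sigma_s).
D = 2
mu, Sigma = np.zeros(D), np.eye(D)          # clean-trained acoustic model
s_hat = np.array([2.0, 0.0])                # enhanced feature
Sigma_s = np.diag([4.0, 0.1])               # dimension 0 is very uncertain

ll_plain = multivariate_normal(mu, Sigma).logpdf(s_hat)
ll_uncert = multivariate_normal(mu, Sigma + Sigma_s).logpdf(s_hat)
# The unreliable dimension is down-weighted: ll_uncert > ll_plain here.
```

In effect, dimensions the separation stage is unsure about contribute less to the decision, which is what makes the pipeline robust to residual separation errors.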


Audio content description

Bayesian uncertainty estimator (1)

Conventional ML uncertainty estimator:

    p(s|x) = p(s|x, θ̂)  with  θ̂ = arg max_θ p(x|θ)

Proposed Bayesian uncertainty estimator [postdoc Adiloglu]:

    p(s|x) = ∫ p(s, θ|x) dθ

s: target source STFT coefficients; x: mixture STFT coefficients; θ: separation model parameters.

Derived a tractable variational Bayesian (VB) EM approximation.

Similar to conventional ML-EM, but update posterior parameter distributions instead of parameter values.


Audio content description

Bayesian uncertainty estimator (2)

Proposed a proof-of-concept noise-robust speaker identification benchmark based on the CHiME domestic noise data.

Results:

[Plot: speaker identification accuracy (% correct) vs. SNR (dB, -6 to 9) for VB-UP, ML-UP, and raw noisy data]


Audio content description

Uncertainty training (1)

Conventional training approaches:

training on clean data,

training on noisy data without uncertainty.

Both are biased: the amount of noise is underestimated or overestimated.

Proposed uncertainty training paradigm [postdoc Ozerov]:

[Pipeline diagram: uncertainty training]



Audio content description

Uncertainty training (2)

Derived an EM algorithm that optimizes the uncertainty decoding objective on noisy training data by alternately:

estimating 1st and 2nd order moments of the underlying clean data,

updating the model parameters given these moments.
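The alternating scheme above can be sketched for the simplest case: a single Gaussian clean-data model observed under known additive Gaussian noise (an illustrative reduction with invented numbers, not the actual GMM-based system):

```python
import numpy as np

rng = np.random.default_rng(5)
D, N = 2, 2000

# Clean data ~ N(mu_true, I); we only observe noisy versions x_i.
mu_true, noise_var = np.array([1.0, -1.0]), 0.5
s_clean = rng.multivariate_normal(mu_true, np.eye(D), N)
x = s_clean + rng.multivariate_normal(np.zeros(D), noise_var * np.eye(D), N)

mu, Sigma = x.mean(axis=0), np.cov(x.T)     # init on noisy data
for _ in range(50):
    # E-step: 1st/2nd-order moments of the underlying clean data
    G = Sigma @ np.linalg.inv(Sigma + noise_var * np.eye(D))
    m = mu + (x - mu) @ G.T                 # E[s_i | x_i]
    S = Sigma - G @ Sigma                   # Cov[s_i | x_i]
    # M-step: update the model parameters given these moments
    mu = m.mean(axis=0)
    Sigma = S + np.cov(m.T, bias=True)

# Sigma now estimates the clean-data covariance, smaller than np.cov(x.T):
# training directly on noisy data would overestimate the spread.
```

This is the bias the slide refers to: ignoring uncertainty inflates the learned covariance, while the moment-based EM recovers the clean-data statistics.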

% correct on the same robust speaker identification benchmark:

    Enhanced  Training      Decoding      | Training condition
    signal    approach      approach      | Clean   Matched  Unmatched  Multi
    No        Conventional  Conventional  | 65.17   71.81    69.34      84.09
    Yes       Conventional  Conventional  | 55.22   82.11    80.91      90.12
    Yes       Conventional  Uncertainty   | 75.51   78.60    77.58      85.02
    Yes       Uncertainty   Uncertainty   | 75.51   82.87    81.52      91.13

Best results when using both uncertainty decoding and training. Works even for unmatched training data!

Also applied to singer identification [Lagrange et al., 2012].


Audio content description

Music language modeling: an exploratory study

Language modeling is needed to bridge the semantic gap.

Except for a few studies [Raphael, Ryynanen, Mauch...], this issue has been overlooked in music.

Managed the Inria EA VERSAMUS project with U. Tokyo.

Roadmap [Vincent, 2010]; multiple dependencies [postdoc Raczynski]; semiotic structure [Bimbot, PhD Sargent].

Overall features: O Tags

Temporal features: T1 Structure; T2 Meter; T3 Rhythm

Symbolic features: S1 Notated tempo; S2 Notated loudness; S3 Key/mode; S4 Harmony; S5 Instrumentation; S6 Lyrics; S7 Quantized notes

Expressive features: E1 Expressive tempo; E2 Expressive loudness; E3 Instrumental timbre; E4 Expressive notes; E5 Rendering

Acoustic features: A1 Tracks; A2 Mix; A3 Low-level features
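As a toy illustration of language modeling at the symbolic level, here is a bigram model over chord symbols (the corpus and smoothing choice are invented, purely to show the idea):

```python
from collections import Counter, defaultdict

# Invented corpus of chord sequences (Roman-numeral symbols).
corpus = [["I", "IV", "V", "I"], ["I", "vi", "IV", "V"], ["I", "IV", "V", "I"]]

# Bigram counts with add-one smoothing over the observed vocabulary.
vocab = sorted({chord for seq in corpus for chord in seq})
counts = defaultdict(Counter)
for seq in corpus:
    for prev, cur in zip(seq, seq[1:]):
        counts[prev][cur] += 1

def prob(cur, prev):
    """Smoothed probability of chord `cur` following chord `prev`."""
    return (counts[prev][cur] + 1) / (sum(counts[prev].values()) + len(vocab))

# The cadence V -> I comes out more probable than V -> IV in this corpus.
```

Such a prior over symbol sequences can then rescore the hypotheses of an acoustic transcription system, exactly as language models do in speech recognition.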


Research directions

Part 1. Audio source separation

Part 2. Audio content description

Part 3. Research directions


Research directions

Research directions in source separation

Audio source separation has become a mature topic which is now at the stage of applied research and technology transfer.

Some remaining challenges:

Benefit from the advantages of both time-domain and Gaussian models ⇒ unified framework accounting for phase in Gaussian models

Overcome local optima of the EM algorithms ⇒ advanced Bayesian inference (structured VB, ensemble models...)

Address automatic model selection ⇒ Bayesian model selection

Deploy real-world applications ⇒ exploit extra information, e.g., source repetitions [PhD Souviraa].


Research directions

Research directions in content description

The uncertainty propagation paradigm is still emerging and lies at the frontier of exploratory and applied research.

Some remaining challenges:

Obtain more accurate and robust uncertainty estimates ⇒ finer Bayesian approximations (structured VB...) [PhD Tran]

Provide feedback from speech/speaker recognition to source separation ⇒ constraining spectral envelopes in our flexible spectral model

Reduce the semantic gap in music processing ⇒ take the opportunity of the move to PAROLE to exploit and adapt successful approaches in natural language processing [PhD Mesnil].


Research directions

Conclusion

Mix of short-term and long-term research united by the use and development of a Bayesian modeling and inference framework.

Application focus in PAROLE: speech enhancement and robust speech recognition.

Many more potential applications, including:

high-fidelity hearing aids and mobile communications,

voice applications, multimedia document indexing, music search,

3D audio rendering, repurposing, interactive applications. . .

Ultimate vision: enhance, understand and interact with complex audio data in a seamless fashion.


Research directions

Many thanks to. . .

Kamil Adiloglu, Shoko Araki, Roland Badeau, Jon Barker, Alexis Benichoux, Nancy Bertin, Frederic Bimbot, Charles Blandin, Ngoc Duong, Valentin Emiya, Remi Gribonval, Nobutaka Ito, Maria Jafari, Matthieu Kowalski, Mathieu Lagrange, Stephanie Lemaile, Pierre Leveau, Jonathan Le Roux, Dimitris Moreau, Andrew Nesbit, Alexey Ozerov, Nobutaka Ono, Mark Plumbley, Stanisław Raczynski, Gabriel Sargent, Laurent Simon, Joachim Thiemann, and many others...
