Nonnegative Matrix Factorization - College of Charleston (langvillea.people.cofc.edu/NISS-NMF.pdf)

Transcript (64 pages)

Page 1

The Nonnegative Matrix Factorization: a tutorial

Barbara Ball, C. of Charleston, Mathematics Dept. ([email protected])
Atina Brooks, N.C. State U., Statistics Dept. ([email protected])
Amy Langville, C. of Charleston, Mathematics Dept. ([email protected])

NISS NMF Workshop
February 23–24, 2007

Page 2

Outline

• Two Factorizations:

— Singular Value Decomposition

— Nonnegative Matrix Factorization

• Why factor anyway?

• Computing the NMF

— Early Algorithms

— Recent Algorithms

• Extensions of NMF

Page 3

Data Matrix A_{m×n} with rank r

Examples: term-by-document matrix, feature-by-item matrix, pixel intensity-by-image matrix, user-by-purchase matrix, gene-by-DNA microarray matrix, terrorist-by-action matrix

SVD

A = U Σ V^T = ∑_{i=1}^r σ_i u_i v_i^T

Page 4


What is the SVD?

Page 5


The SVD

(figure: singular values/vectors ordered by decreasing importance)

Page 6

Data Matrix A_{m×n} with rank r

Examples: term-by-document matrix, feature-by-item matrix, pixel intensity-by-image matrix, user-by-purchase matrix, gene-by-DNA microarray matrix, terrorist-by-action matrix

SVD

A = U Σ V^T = ∑_{i=1}^r σ_i u_i v_i^T

Low Rank Approximation

use A_k = ∑_{i=1}^k σ_i u_i v_i^T in place of A
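In NumPy, the truncated sum above comes straight out of the SVD routine; a minimal sketch (random toy data, k chosen arbitrarily) also checks the Frobenius error against the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 12))                 # toy m x n data matrix

# full SVD: A = U diag(s) V^T, singular values sorted in decreasing order
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# rank-k truncation: A_k = sum_{i=1}^k s_i u_i v_i^T
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# the Frobenius error equals the norm of the discarded singular values
err = np.linalg.norm(A - A_k, "fro")
tail = np.sqrt(np.sum(s[k:] ** 2))
assert np.isclose(err, tail)
```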

Page 7


SVD Rank Reduction

Page 8

Why use Low Rank Approximation?

• Data Compression and Storage when k ≪ r

• Remove noise and uncertainty

⇒ improved performance on data mining task of retrieval (e.g., find similar items)

⇒ improved performance on data mining task of clustering

Page 9

Properties of SVD

• basis vectors ui and vi are orthogonal

• u_ij, v_ij are mixed in sign

     A_k   =   U_k    Σ_k    V_k^T
    nonneg    mixed  nonneg  mixed

• U, V are dense

• uniqueness—while there are many SVD algorithms, they all create the same (truncated) factorization

• optimality—of all rank-k approximations, Ak is optimal

‖A − A_k‖_F = min_{rank(B)≤k} ‖A − B‖_F

Page 10

Summary of Truncated SVD

Strengths

• using Ak in place of A gives improved performance

• noise reduction isolates essential components of matrix

• best rank-k approximation

• Ak is unique

Weaknesses

• storage—Uk and Vk are usually completely dense

• interpretation of basis vectors is difficult due to mixed signs

• good truncation point k is hard to determine

(figure: scree plot of singular values σ_i; the truncation point k = 28 is marked)

• orthogonality restriction

Page 11

Other Low-Rank Approximations

• QR decomposition

• any URV^T factorization

• Semidiscrete decomposition (SDD)

A_k = X_k D_k Y_k^T, where D_k is diagonal, and elements of X_k, Y_k ∈ {−1, 0, 1}.

• CUR factorization

Page 12

Other Low-Rank Approximations

• QR decomposition

• any URV^T factorization

• Semidiscrete decomposition (SDD)

A_k = X_k D_k Y_k^T, where D_k is diagonal, and elements of X_k, Y_k ∈ {−1, 0, 1}.

• CUR factorization

BUT

All create basis vectors that are mixed in sign. Negative elements make interpretation difficult.

Page 13

Other Low-Rank Approximations

• QR decomposition

• any URV^T factorization

• Semidiscrete decomposition (SDD)

A_k = X_k D_k Y_k^T, where D_k is diagonal, and elements of X_k, Y_k ∈ {−1, 0, 1}.

• CUR factorization

BUT

All create basis vectors that are mixed in sign. Negative elements make interpretation difficult.

⇒ Nonnegative Matrix Factorization

Page 14

Nonnegative Matrix Factorization (2000)

Daniel Lee and Sebastian Seung’s Nonnegative Matrix Factorization

Idea: use a low-rank approximation with nonnegative factors to remedy the weaknesses of the truncated SVD

SVD:   A_k = U_k Σ_k V_k^T    (U_k mixed, Σ_k nonneg, V_k^T mixed)

NMF:   A_k = W_k H_k          (A_k, W_k, H_k all nonneg)

Page 15

Interpretation with NMF

• columns of W are the underlying basis vectors, i.e., each of the n columns of A can be built from the k columns of W.

• columns of H give the weights associated with each basis vector.

A_k e_1 = W_k H_{∗1} = w_1 h_{11} + w_2 h_{21} + … + w_k h_{k1}

• Wk, Hk ≥ 0 ⇒ immediate interpretation (additive parts-based rep.)
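A quick numerical check of this additive expansion; W and H here are arbitrary nonnegative matrices of my own, not a fitted factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 6, 3, 4
W = rng.random((m, k))            # nonnegative basis vectors (columns)
H = rng.random((k, n))            # nonnegative weights

# first column of the approximation: A_k e_1 = W H[:, 0]
col0 = W @ H[:, 0]

# the same column written as an additive combination of basis vectors
expanded = sum(H[i, 0] * W[:, i] for i in range(k))
assert np.allclose(col0, expanded)
assert (col0 >= 0).all()          # nonneg factors give a nonneg reconstruction
```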

Page 16

Image Mining

(figure: image column A_i represented via the SVD factors U, Σ, V_i and via the NMF factors W, H)

Page 17

Image Mining Applications

(figure: original image, r = 400, and reconstructed images, k = 100)

• Data compression

• Find similar images

• Cluster images

Page 18

Text Mining: MED dataset (k = 10)

Highest-weighted terms in basis vector W∗1: ventricular, aortic, septal, left, defect, regurgitation, ventricle, valve, cardiac, pressure

Highest-weighted terms in basis vector W∗2: oxygen, flow, pressure, blood, cerebral, hypothermia, fluid, venous, arterial, perfusion

Highest-weighted terms in basis vector W∗5: children, child, autistic, speech, group, early, visual, anxiety, emotional, autism

Highest-weighted terms in basis vector W∗6: kidney, marrow, dna, cells, nephrectomy, unilateral, lymphocytes, bone, thymidine, rats

Page 19

Text Mining

• polysemes are broken across several basis vectors w_i

Page 20

Text Mining Applications

• Data compression: W_k H_k

• Find similar terms: 0 ≤ cos θ = W_k H_k q ≤ 1

• Find similar documents: 0 ≤ cos θ = q^T W_k H_k ≤ 1

• Cluster documents
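The slide's cos θ shorthand omits the usual length normalization; a hedged sketch of scoring documents against a query q through the rank-k approximation W_k H_k (the data, query, and normalization here are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, n = 8, 3, 5                  # terms, topics, documents
W = rng.random((m, k))
H = rng.random((k, n))
A_k = W @ H                        # rank-k term-by-document approximation

q = np.zeros(m)
q[[0, 2]] = 1.0                    # query containing terms 0 and 2

# cosine similarity between the query and each approximated document column
sims = (q @ A_k) / (np.linalg.norm(q) * np.linalg.norm(A_k, axis=0))
best = int(np.argmax(sims))        # most similar document index

# nonnegative data keeps every cosine in [0, 1]
assert np.all((sims >= 0) & (sims <= 1))
```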

Page 21

Clustering with the NMF

Clustering Terms

• use rows of W_{m×k}:

              cl.1  cl.2  ...  cl.k
    term1      .9    0    ...   .3
    term2      .1    .8   ...   .2
      ⋮        ⋮     ⋮    ⋱     ⋮

Clustering Documents

• use cols of H_{k×n}:

              doc1  doc2  ...  docn
    cl.1       .4    0    ...   .5
      ⋮        ⋮     ⋮    ⋱     ⋮
    cl.k       0     .8   ...   .2

soft clustering is very natural
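Reading cluster memberships off the columns of H as the slide suggests; normalizing each column gives soft memberships, and an argmax gives a hard assignment when one is needed (a sketch with random data of my own):

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 3, 6
H = rng.random((k, n))             # nonnegative cluster-by-document weights

# soft clustering: column j gives doc j's affinity to each of the k clusters
membership = H / H.sum(axis=0)     # each column now sums to 1
hard = membership.argmax(axis=0)   # hard assignment, if one is needed

assert np.allclose(membership.sum(axis=0), 1.0)
```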

Page 22

The Enron Email Dataset (SAS)

• PRIVATE email collection of 150 Enron employees during 2001

• 92,000 terms and 65,000 messages

• Term-by-Message Matrix

                fastow1  fastow2  skilling1  ...
       ⋮           ⋮        ⋮         ⋮      ⋱
    subpoena       2        0         1      ...
    dynegy         0        3         0      ...
       ⋮           ⋮        ⋮         ⋮      ⋱

Page 23

Page 24

Page 25

Text Mining Applications

• Data compression: W_k H_k

• Find similar terms: 0 ≤ cos θ = W_k H_k q ≤ 1

• Find similar documents: 0 ≤ cos θ = q^T W_k H_k ≤ 1

• Cluster documents

• Topic detection and tracking

Page 26

Text Mining Applications: Enron email messages, 2001

Page 27

Recommendation Systems

purchase history matrix

A =
            User 1  User 2  ...  User n
    Item 1    1       5     ...    0
    Item 2    0       0     ...    1
      ⋮       ⋮       ⋮     ⋱      ⋮
    Item m    0       1     ...    2

• Create profiles for classes of users from basis vectors wi

• Find similar users

• Find similar items

Page 28

Microarray Study (Kim and Tidor, 2003)

• 300 experiments with 5436 S. cerevisiae genes

• expression for a gene described by the expression in an experiment divided by the control experiment of wild type under typical conditions

• basis vector represented by an experiment, containing a relative expression for each gene and its related feature

Page 29

Functional Relationships

Page 30

More Functional Relationships

Page 31

Comparative Study

Page 32

Metagenes Study (Brunet et al., 2004)

Data: gene expressions × samples

Page 33

Leukemia Samples

Page 34

Samples from Medulloblastoma Tumors

Page 35

CNS Embryonal Tumors

Page 36

When Does Non-Negative Matrix Factorization Give a Correct Decomposition into Parts?

Donoho and Stodden, 2003

• Set of weighted generators, non-negative

• Each combination separable from other combinations

• All combinations represented in dataset

Page 37

Example of a Separable Factorial Articulation Family

Page 38

Properties of NMF

• basis vectors wi are not ⊥ ⇒ can have overlap of topics

• can restrict W, H to be sparse

• Wk, Hk ≥ 0 ⇒ immediate interpretation (additive parts-based rep.)

EX: large w_ij ⇒ basis vector w_i is mostly about term j

EX: h_{i1} measures how much doc1 points in the "direction" of topic vector w_i

A_k e_1 = W_k H_{∗1} = w_1 h_{11} + w_2 h_{21} + … + w_k h_{k1}

• NMF is algorithm-dependent: W, H not unique

Page 39

Report Card for SVD and NMF

Subject                                                            SVD  NMF
low rank approximation improves performance on data mining tasks    A    A
noise reduction isolates essential components of matrix             A    A
quality of low rank approximation, ‖A − A_k‖                        A+   B+
uniqueness of low rank approximation                                A+   B
storage of low rank factors                                         D    A
interpretation of vectors in low rank factors                       C    A
choosing truncation point k                                         C    C
orthogonality restriction on vectors in low rank factors            B    A
updating the factors in the factorization                           B    C
downdating the factors in the factorization                         A    C

Page 40

Computation of NMF (Lee and Seung, 2000)

Mean squared error objective function

min ‖A − WH‖²_F

s.t. W, H ≥ 0

Nonlinear Optimization Problem

— convex in W or H, but not both ⇒ tough to get global min

— huge # unknowns: mk for W and kn for H (EX: A_{70K×10K} and k = 10 topics ⇒ 800K unknowns)

— above objective is one of many possible

Page 41

Other Objective Functions

Divergence objective function

min ∑_{i,j} ( A_ij log(A_ij / [WH]_ij) − A_ij + [WH]_ij )

Weighted Mean Squared Error objective function

min ‖B .* (A − WH)‖²_F

Weighted Divergence objective function

min ∑_{i,j} B_ij ( A_ij log(A_ij / [WH]_ij) − A_ij + [WH]_ij )

Bregman Divergence Class of objective functions

(coming tomorrow–Inderjit Dhillon)

Suite of Other Divergence objective functions

(NMFLAB–Cichocki)

Page 42

Table 1. Amari Alpha-NMF algorithms

Amari alpha divergence:

    D_A^(α)(y_ik ‖ z_ik) = ∑_ik [ y_ik^α z_ik^(1−α) − α y_ik + (α − 1) z_ik ] / (α(α − 1))

Algorithm:

    x_jk ← x_jk ( [ ∑_{i=1}^m a_ij (y_ik / [AX]_ik)^α ]^(ω_X/α) )^(1+α_sX)

    a_ij ← a_ij ( [ ∑_{k=1}^N x_jk (y_ik / [AX]_ik)^α ]^(ω_A/α) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj,    0 < ω_X < 2, 0 < ω_A < 2

Pearson distance (α = 2):

    D_A^(α=2)(y_ik ‖ z_ik) = ∑_ik (y_ik − [AX]_ik)² / [AX]_ik

Algorithm:

    x_jk ← x_jk ( [ ∑_{i=1}^m a_ij (y_ik / [AX]_ik)² ]^(ω_X/2) )^(1+α_sX)

    a_ij ← a_ij ( [ ∑_{k=1}^N x_jk (y_ik / [AX]_ik)² ]^(ω_A/2) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj,    0 < ω_X < 2, 0 < ω_A < 2

Hellinger distance (α = 1/2):

    D_A^(α=0.5)(y_ik ‖ z_ik) = 2 ∑_ik ( √y_ik − √[AX]_ik )²

Algorithm:

    x_jk ← x_jk ( [ ∑_{i=1}^m a_ij √(y_ik / [AX]_ik) ]^(2ω_X) )^(1+α_sX)

    a_ij ← a_ij ( [ ∑_{k=1}^N x_jk √(y_ik / [AX]_ik) ]^(2ω_A) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj,    0 < ω_X < 2, 0 < ω_A < 2

Page 43

Table 2. Amari Alpha-NMF algorithms (continued)

Kullback-Leibler divergence (α → 1):

    lim_{α→1} D_A^(α)(y_ik ‖ z_ik) = ∑_ik ( y_ik log(y_ik / [AX]_ik) − y_ik + [AX]_ik )

Algorithm:

    x_jk ← x_jk ( [ ∑_{i=1}^m a_ij y_ik / [AX]_ik ]^(ω_X) )^(1+α_sX)

    a_ij ← a_ij ( [ ∑_{k=1}^N x_jk y_ik / [AX]_ik ]^(ω_A) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj,    0 < ω_X < 2, 0 < ω_A < 2

Dual Kullback-Leibler divergence (α → 0):

    lim_{α→0} D_A^(α)(y_ik ‖ z_ik) = ∑_ik ( [AX]_ik log([AX]_ik / y_ik) + y_ik − [AX]_ik )

Algorithm:

    x_jk ← x_jk ( ∏_{i=1}^m (y_ik / [AX]_ik)^(ω_X a_ij) )^(1+α_sX)

    a_ij ← a_ij ( ∏_{k=1}^N (y_ik / [AX]_ik)^(η̃_j x_jk) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj,    0 < ω_X < 2, 0 < ω_A < 2

Page 44

Table 3. Other generalized NMF algorithms

Beta generalized divergence:

    D_K^(β)(y_ik ‖ z_ik) = ∑_ik [ y_ik ( y_ik^(β−1) − [AX]_ik^(β−1) ) / (β(β − 1)) + [AX]_ik^(β−1) ( [AX]_ik − y_ik ) / β ]

Kompass algorithm:

    x_jk ← x_jk ( ∑_{i=1}^m a_ij y_ik / [AX]_ik^(2−β) ) / ( ∑_{i=1}^m a_ij [AX]_ik^(β−1) + ε )

    a_ij ← a_ij ( ( ∑_{k=1}^N x_jk y_ik / [AX]_ik^(2−β) ) / ( ∑_{k=1}^N x_jk [AX]_ik^(β−1) + ε ) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj

Triangular discrimination:

    D_T^(β)(y_ik ‖ z_ik) = ∑_ik [ y_ik^β z_ik^(1−β) − β y_ik + (β − 1) z_ik ] / (β(β − 1))

Algorithm:

    x_jk ← ( x_jk [ ∑_{i=1}^m a_ij ( 2y_ik / (y_ik + [AX]_ik) )² ]^(ω_X) )^(1+α_sX)

    a_ij ← ( a_ij [ ∑_{k=1}^N x_jk ( 2y_ik / (y_ik + [AX]_ik) )² ]^(ω_A) )^(1+α_sA)

    a_ij ← a_ij / ∑_p a_pj,    0 < ω_X < 2, 0 < ω_A < 2

Itakura-Saito distance:

    D_IS(y_ik ‖ z_ik) = ∑_ik ( y_ik / z_ik − log(y_ik / z_ik) − 1 )

Algorithm:

    X ← X ⊙ [ (A^T P) ⊘ (A^T Q + ε) ]^(.β)

    A ← A ⊙ [ (P X^T) ⊘ (Q X^T + ε) ]^(.β)

    a_ij ← a_ij / ∑_p a_pj,    β ∈ [0.5, 1],    P = Y ⊘ (AX + ε)^(.2),    Q = 1 ⊘ (AX + ε)

Page 45

Table 4. Generalized SMART NMF adaptive algorithms and corresponding loss functions - part I.

Generalized SMART algorithms:

    a_ij ← a_ij exp( ∑_{k=1}^N η̃_j x_jk ρ(y_ik, z_ik) ),    x_jk ← x_jk exp( ∑_{i=1}^m η_j a_ij ρ(y_ik, z_ik) ),

    a_j = ∑_{i=1}^m a_ij = 1 ∀j,  a_ij ≥ 0,  y_ik > 0,  z_ik = [AX]_ik > 0,  x_jk ≥ 0

Divergence D(Y ‖ AX) and error function ρ(y_ik, z_ik):

Dual Kullback-Leibler I-divergence, D_KL2(AX ‖ Y):

    ∑_ik ( z_ik ln(z_ik / y_ik) + y_ik − z_ik ),    ρ(y_ik, z_ik) = ln(y_ik / z_ik)

Relative Arithmetic-Geometric divergence, D_RAG(Y ‖ AX):

    ∑_ik ( (y_ik + z_ik) ln( (y_ik + z_ik) / (2y_ik) ) + y_ik − z_ik ),    ρ(y_ik, z_ik) = ln( 2y_ik / (y_ik + z_ik) )

Symmetric Arithmetic-Geometric divergence, D_SAG(Y ‖ AX):

    2 ∑_ik ( (y_ik + z_ik)/2 ) ln( (y_ik + z_ik) / (2√(y_ik z_ik)) ),    ρ(y_ik, z_ik) = (y_ik − z_ik) / (2z_ik) + ln( 2√(y_ik z_ik) / (y_ik + z_ik) )

J-divergence, D_J(Y ‖ AX):

    ∑_ik ( (y_ik − z_ik)/2 ) ln(y_ik / z_ik),    ρ(y_ik, z_ik) = (1/2) ln(y_ik / z_ik) + (y_ik − z_ik) / (2z_ik)

Page 46

Table 5. Generalized SMART NMF adaptive algorithms and corresponding loss functions - part II.

Relative Jensen-Shannon divergence, D_RJS(Y ‖ AX):

    ∑_ik ( 2y_ik ln( 2y_ik / (y_ik + z_ik) ) + z_ik − y_ik ),    ρ(y_ik, z_ik) = (y_ik − z_ik) / (2z_ik) + ln( 2√(y_ik z_ik) / (y_ik + z_ik) )

Dual Jensen-Shannon divergence, D_DJS(Y ‖ AX):

    ∑_ik ( y_ik ln( 2z_ik / (z_ik + y_ik) ) + y_ik ln( 2y_ik / (z_ik + y_ik) ) ),    ρ(y_ik, z_ik) = ln( (z_ik + y_ik) / (2y_ik) )

Symmetric Jensen-Shannon divergence, D_SJS(Y ‖ AX):

    ∑_ik ( y_ik ln( 2y_ik / (y_ik + z_ik) ) + z_ik ln( 2z_ik / (y_ik + z_ik) ) ),    ρ(y_ik, z_ik) = ln( (y_ik + z_ik) / (2z_ik) )

Triangular discrimination, D_T(Y ‖ AX):

    ∑_ik (y_ik − z_ik)² / (y_ik + z_ik),    ρ(y_ik, z_ik) = ( 2y_ik / (y_ik + z_ik) )² − 1

Bose-Einstein divergence, D_BE(Y ‖ AX):

    ∑_ik ( y_ik ln( (1 + α) y_ik / (y_ik + α z_ik) ) + α z_ik ln( (1 + α) z_ik / (y_ik + α z_ik) ) ),    ρ(y_ik, z_ik) = α ln( (y_ik + α z_ik) / ((1 + α) z_ik) )

Page 47

Early NMF Algorithms

• Alternating Least Squares

— Paatero 1994

— ALS algorithms that incorporate sparsity

• Multiplicative update rules

— Lee-Seung 2000

— Hoyer 2002

• Gradient Descent

— Hoyer 2004

— Berry-Plemmons 2004

Page 48

PMF Algorithm: Paatero & Tapper 1994

Mean Squared Error—Alternating Least Squares

min ‖A − WH‖²_F
s.t. W, H ≥ 0

————————————————————————
W = abs(randn(m,k));
for i = 1 : maxiter
    (LS) for j = 1 : n (= #docs), solve
         min_{H∗j} ‖A∗j − W H∗j‖²₂   s.t. H∗j ≥ 0
    (LS) for j = 1 : m (= #terms), solve
         min_{Wj∗} ‖Aj∗ − Wj∗ H‖²₂   s.t. Wj∗ ≥ 0
end
————————————————————————
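Each inner subproblem above is a standard nonnegative least-squares solve; one column update might look like this with SciPy's `nnls` (a sketch of a single LS step on random data of my own, not the full PMF iteration):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
m, k = 10, 3
A = rng.random((m, 5))                       # term-by-document matrix
W = np.abs(rng.standard_normal((m, k)))      # current basis factor

# solve min_{h >= 0} || A[:, j] - W h ||_2 for document column j = 0
h, resid = nnls(W, A[:, 0])
assert (h >= 0).all()                        # nonnegativity enforced exactly
```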

Page 49

ALS Algorithm

—————————————————————————
W = abs(randn(m,k));
for i = 1 : maxiter
    (LS)     solve the matrix equation W^T W H = W^T A for H
    (nonneg) H = H .* (H >= 0)
    (LS)     solve the matrix equation H H^T W^T = H A^T for W
    (nonneg) W = W .* (W >= 0)
end
—————————————————————————
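The ALS loop transliterated into NumPy, with `lstsq` standing in for the two linear solves; the zeroing step is the slide's ad hoc nonnegativity fix (sizes, data, and iteration count are placeholders of my own):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 15, 10, 4
A = rng.random((m, n))
W = np.abs(rng.standard_normal((m, k)))

for _ in range(50):
    # LS: solve W^T W H = W^T A for H, then set negative elements to 0
    H = np.linalg.lstsq(W, A, rcond=None)[0]
    H[H < 0] = 0
    # LS: solve H H^T W^T = H A^T for W, then set negative elements to 0
    W = np.linalg.lstsq(H.T, A.T, rcond=None)[0].T
    W[W < 0] = 0

assert (W >= 0).all() and (H >= 0).all()
```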

Page 50

ALS Summary

Pros

+ fast

+ works well in practice

+ speedy convergence

+ only need to initialize W(0)

+ 0 elements not locked

Cons

– no sparsity of W and H incorporated into mathematical setup

– ad hoc nonnegativity: negative elements are set to 0

– ad hoc sparsity: negative elements are set to 0

– no convergence theory

Page 51

Early NMF Algorithms

• Alternating Least Squares

— Paatero 1994

— ALS algorithms that incorporate sparsity

• Multiplicative update rules

— Lee-Seung 2000

— Hoyer 2002

• Gradient Descent

— Hoyer 2004

— Berry-Plemmons 2004

Page 52

NMF Algorithm: Lee and Seung 2000

Mean Squared Error objective function

min ‖A − WH‖²_F
s.t. W, H ≥ 0

————————————————————————
W = abs(randn(m,k));
H = abs(randn(k,n));
for i = 1 : maxiter
    H = H .* (W^T A) ./ (W^T W H + 10⁻⁹);
    W = W .* (A H^T) ./ (W H H^T + 10⁻⁹);
end
————————————————————————

Many parameters affect performance (k, obj. function, sparsity constraints, algorithm, etc.).

— NMF is not unique!

(proof of convergence to fixed point based on E-M convergence proof)
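The mean-squared-error multiplicative updates translate directly into NumPy; the small constant plays the role of the slide's 10⁻⁹ guard against division by zero (data sizes and iteration count are placeholders of my own):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, k = 12, 9, 3
A = rng.random((m, n))
W = np.abs(rng.standard_normal((m, k)))
H = np.abs(rng.standard_normal((k, n)))

err0 = np.linalg.norm(A - W @ H, "fro")
for _ in range(200):
    H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
    W *= (A @ H.T) / (W @ H @ H.T + 1e-9)

# the multiplicative updates keep the objective nonincreasing
assert np.linalg.norm(A - W @ H, "fro") <= err0
assert (W >= 0).all() and (H >= 0).all()
```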

Page 53

NMF Algorithm: Lee and Seung 2000

Divergence objective function

min ∑_{i,j} ( A_ij log(A_ij / [WH]_ij) − A_ij + [WH]_ij )
s.t. W, H ≥ 0

————————————————————————
W = abs(randn(m,k));
H = abs(randn(k,n));
for i = 1 : maxiter
    H = H .* (W^T (A ./ (W H + 10⁻⁹))) ./ (W^T e e^T);
    W = W .* ((A ./ (W H + 10⁻⁹)) H^T) ./ (e e^T H^T);
end
————————————————————————

(proof of convergence to fixed point based on E-M convergence proof)

(objective function tails off after 50-100 iterations)
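A NumPy transliteration of the divergence updates; the e e^T products are never formed explicitly, since W^T e e^T is just the column sums of W broadcast across columns (data shifted away from zero, sizes and iteration count my own):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, k = 12, 9, 3
A = rng.random((m, n)) + 0.1        # keep entries strictly positive
W = np.abs(rng.standard_normal((m, k)))
H = np.abs(rng.standard_normal((k, n)))

def kl(A, WH):
    # divergence objective: sum A log(A/WH) - A + WH
    return np.sum(A * np.log(A / WH) - A + WH)

d0 = kl(A, W @ H + 1e-9)
for _ in range(200):
    H *= (W.T @ (A / (W @ H + 1e-9))) / W.sum(axis=0)[:, None]
    W *= ((A / (W @ H + 1e-9)) @ H.T) / H.sum(axis=1)[None, :]

assert kl(A, W @ H + 1e-9) <= d0    # divergence is nonincreasing
```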

Page 54

Multiplicative Update Summary

Pros

+ convergence theory: guaranteed to converge to fixed point

+ good initialization W(0), H(0) speeds convergence and gets to a better fixed point

Cons

– fixed point may be local min or saddle point

– good initialization W(0), H(0) speeds convergence and gets to a better fixed point

– slow: many matrix-matrix multiplications at each iteration

– hundreds/thousands of iterations until convergence

– no sparsity of W and H incorporated into mathematical setup

– 0 elements locked

Page 55


Page 56


Page 57


Page 58


Page 59


Page 60

Early NMF Algorithms

• Alternating Least Squares

— Paatero 1994

— ALS algorithms that incorporate sparsity

• Multiplicative update rules

— Lee-Seung 2000

— Hoyer 2002

• Gradient Descent

— Hoyer 2004

— Berry-Plemmons 2004

Page 61

Sparsity Measures

• Berry et al.: ‖x‖²₂

• Hoyer: spar(x_{n×1}) = (√n − ‖x‖₁/‖x‖₂) / (√n − 1)

• Diversity measure: E^(p)(x) = ∑_{i=1}^n |x_i|^p, 0 ≤ p ≤ 1;  E^(p)(x) = −∑_{i=1}^n |x_i|^p, p < 0

  Rao and Kreutz-Delgado: algorithms for minimizing E^(p)(x) s.t. Ax = b, but an expensive iterative procedure

• Ideal: nnz(x); not continuous, NP-hard to use in optimization
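Hoyer's measure interpolates between 0 (all entries equal) and 1 (a single nonzero entry); a direct implementation:

```python
import numpy as np

def hoyer_sparsity(x):
    """spar(x) = (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

assert np.isclose(hoyer_sparsity([1, 0, 0, 0]), 1.0)  # maximally sparse
assert np.isclose(hoyer_sparsity([1, 1, 1, 1]), 0.0)  # maximally dense
```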

Page 62

NMF Algorithm: Berry et al. 2004

Gradient Descent–Constrained Least Squares

————————————————————————————
W = abs(randn(m,k)); (scale cols of W to unit norm)
H = zeros(k,n);
for i = 1 : maxiter
    (CLS) for j = 1 : #docs, solve
          min_{H∗j} ‖A∗j − W H∗j‖²₂ + λ‖H∗j‖²₂   s.t. H∗j ≥ 0
    (GD)  W = W .* (A H^T) ./ (W H H^T + 10⁻⁹); (scale cols of W)
end
————————————————————————————

Page 63

NMF Algorithm: Berry et al. 2004

Gradient Descent–Constrained Least Squares

————————————————————————————
W = abs(randn(m,k)); (scale cols of W to unit norm)
H = zeros(k,n);
for i = 1 : maxiter
    (CLS) for j = 1 : #docs, solve
          min_{H∗j} ‖A∗j − W H∗j‖²₂ + λ‖H∗j‖²₂   s.t. H∗j ≥ 0
          solve for H: (W^T W + λI) H = W^T A; (small matrix solve)
    (GD)  W = W .* (A H^T) ./ (W H H^T + 10⁻⁹); (scale cols of W)
end
————————————————————————————

(objective function tails off after 15-30 iterations)
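The constrained-least-squares step solves one small k × k system for all of H at once; a hedged NumPy sketch of the loop (λ, the data, and the iteration count are arbitrary choices of my own):

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, k, lam = 15, 10, 4, 0.1
A = rng.random((m, n))
W = np.abs(rng.standard_normal((m, k)))
W /= np.linalg.norm(W, axis=0)          # scale cols of W to unit norm

for _ in range(30):
    # CLS: (W^T W + lam I) H = W^T A  (small k x k solve), then clip negatives
    H = np.linalg.solve(W.T @ W + lam * np.eye(k), W.T @ A)
    H[H < 0] = 0
    # GD: multiplicative gradient-descent step for W, then rescale its columns
    W *= (A @ H.T) / (W @ H @ H.T + 1e-9)
    W /= np.linalg.norm(W, axis=0) + 1e-9

assert (W >= 0).all() and (H >= 0).all()
```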

Page 64

Berry et al. 2004 Summary

Pros

+ fast: less work per iteration than most other NMF algorithms

+ fast: small # of iterations until convergence

+ sparsity parameter for H

Cons

– 0 elements in W are locked

– no sparsity parameter for W

– ad hoc nonnegativity: negative elements in H are set to 0; could run lsqnonneg or snnls instead

– no convergence theory