Parallel Local Graph Clustering - MIT CSAIL


Transcript of Parallel Local Graph Clustering - MIT CSAIL

Page 1: Parallel Local Graph Clustering - MIT CSAIL

Parallel Local Graph Clustering
Julian Shun

Joint work with Farbod Roosta-Khorasani, Kimon Fountoulakis, and Michael W. Mahoney
Work appeared in VLDB 2016

Page 2: Parallel Local Graph Clustering - MIT CSAIL

Metric for Cluster Quality

[Figure: example graph with vertices A-H used to illustrate conductance.]

Conductance = (number of edges leaving cluster) / (sum of degrees of vertices in cluster*)

Conductance of {A,B} = 2/(2+2) = 0.5

Conductance of {A,B,C} = 1/(2+2+3) = 0.14

Low conductance → “better” cluster

*Consider the smaller of the two sides
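Written out directly from this definition, a minimal Python sketch (assuming the graph is an adjacency-list dict and the cluster is a set of vertices; the names are illustrative, not from the paper):

def conductance(graph, cluster):
    # Edges with exactly one endpoint in the cluster.
    crossing = sum(1 for u in cluster for v in graph[u] if v not in cluster)
    vol = sum(len(graph[u]) for u in cluster)          # sum of degrees in the cluster
    total_vol = sum(len(graph[u]) for u in graph)      # 2 * number of edges
    # Per the footnote above, use the volume of the smaller of the two sides.
    return crossing / min(vol, total_vol - vol)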


Page 3: Parallel Local Graph Clustering - MIT CSAIL

Clustering Algorithms
• Finding the minimum conductance cluster is NP-hard
• Many approximation algorithms and heuristic algorithms exist
  • Spectral partitioning, METIS (recursive bisection), maximum flow-based algorithms, etc.
• All algorithms are global, i.e., they need to touch the whole graph at least once, requiring at least |V|+|E| work
• Can be very expensive for billion-scale graphs

1.4 billion vertices, 6.6 billion edges

3.5 billion vertices, 128 billion edges

1.4 billion vertices, 1 trillion edges


Page 4: Parallel Local Graph Clustering - MIT CSAIL

Local Clustering Algorithms
• Do work proportional only to the size of the output cluster (can be much less than |V|+|E|)
• Take as input a “seed” set of vertices and find a good cluster close to the “seed” set


Page 5: Parallel Local Graph Clustering - MIT CSAIL

Local Clustering Algorithms
• Many meaningful clusters in real-world networks are relatively small [Leskovec et al. 2008, 2010, Jeub et al. 2015]

Some existing local algorithms:
• Spielman and Teng 2004
• Andersen, Chung, and Lang 2006
• Andersen and Peres 2009
• Gharan and Trevisan 2012
• Kloster and Gleich 2014
• Chung and Simpson 2015

All existing local algorithms are sequential. Existing studies are on small to medium graphs.

Goal: Develop parallel local clustering algorithms that scale to massive graphs

[Figure: network community profile plot (LiveJournal: |V|=4M, |E|=40M)]


Page 6: Parallel Local Graph Clustering - MIT CSAIL

Parallel Local Algorithms
• We present the first parallel algorithms for local graph clustering:
  • Nibble [Spielman and Teng 2004]
  • PageRank-Nibble [Andersen, Chung, and Lang 2006]
  • Deterministic HeatKernel-PageRank [Kloster and Gleich 2014]
  • Randomized HeatKernel-PageRank [Chung and Simpson 2015]
  • Sweep cut
• All local algorithms take various input parameters that affect the output cluster
• Parallel Method 1: Try many different parameters independently in parallel (see the sketch after this list)
• Parallel Method 2: Parallelize the algorithm for an individual run
  • Useful for interactive settings where a data analyst tweaks parameters based on previous results
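Parallel Method 1 maps naturally onto a process pool. A minimal sketch, assuming a local clustering routine pr_nibble(graph, seed, alpha, eps) like the one outlined on the PageRank-Nibble slide below (the routine name and the pool-based driver are illustrative, not the paper's code):

from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_one(args):
    graph, seed, alpha, eps = args
    return pr_nibble(graph, seed, alpha, eps)   # assumed local clustering routine

def parameter_sweep(graph, seed, alphas, epsilons):
    # Run one independent local clustering per (alpha, epsilon) setting, all in parallel.
    settings = list(product(alphas, epsilons))
    with ProcessPoolExecutor() as pool:
        results = pool.map(run_one, [(graph, seed, a, e) for a, e in settings])
    return dict(zip(settings, results))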


Page 7: Parallel Local Graph Clustering - MIT CSAIL

PageRank-Nibble [Andersen, Chung, and Lang 2006]

• Input: seed vertex s, error ε, teleportation α

• Maintain approximate PageRank vector p and residual vector r (represented sparsely with a hash table for local running time)

• Initialize p = {} (contains 0.0 everywhere implicitly)

r = {(s,1.0)} (contains 0.0 everywhere except s)

• While(any vertex u satisfies r[u]/deg(u) ≥ ε)

1) Choose any vertex u where r[u]/deg(u) ≥ ε

2) p[u] = p[u] + αr[u]

3) For all neighbors ngh of u: r[ngh] = r[ngh] + (1-α)r[u]/(2deg(u))

4) r[u] = (1-α)r[u]/2

• Apply sweep cut rounding on p to obtain cluster

Note: ||p||₁ + ||r||₁ = 1.0 (i.e., p and r together form a probability distribution)

Algorithm Idea

Iteratively spread probability mass around the graph
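As a concrete reference, a minimal Python sketch of the sequential push loop above (graph as an adjacency-list dict; this is a sketch, not the paper's implementation, which uses sparse hash tables and Ligra):

from collections import defaultdict

def pr_nibble(graph, seed, alpha=0.1, eps=0.1):
    p = defaultdict(float)   # approximate PageRank vector (sparse)
    r = defaultdict(float)   # residual vector (sparse)
    r[seed] = 1.0
    deg = lambda v: len(graph[v])
    while True:
        above = [u for u in list(r) if r[u] / deg(u) >= eps]
        if not above:
            break
        u = above[0]                       # sequential rule: pick any such vertex
        p[u] += alpha * r[u]               # move mass into p
        for ngh in graph[u]:               # spread half of the rest to neighbors
            r[ngh] += (1 - alpha) * r[u] / (2 * deg(u))
        r[u] = (1 - alpha) * r[u] / 2      # keep the other half at u
    return p                               # a sweep cut over p then extracts the cluster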


Page 8: Parallel Local Graph Clustering - MIT CSAIL

PR-Nibble

[Figure: PR-Nibble worked example on the example graph with vertices A-H. Seed vertex = A, ε = 0.1, α = 0.1. Initially r[A] = 1.0, and p = 0.0, r = 0.0 everywhere else.]

While(any vertex u satisfies r[u]/deg(u) ≥ ε)
1) Choose any vertex u where r[u]/deg(u) ≥ ε
2) p[u] = p[u] + αr[u]
3) For all neighbors ngh of u: r[ngh] = r[ngh] + (1-α)r[u]/(2deg(u))
4) r[u] = (1-α)r[u]/2

r[A]/deg(A) = 1/2 ≥ ε, so A is pushed first: p[A] = 0.1, r[A] = 0.45, and each of A's two neighbors receives residual 0.225.

r[B]/deg(B) = 0.225/2 ≥ ε, so B is pushed next: p[B] = 0.0225, r[B] = 0.101, and each of B's neighbors receives additional residual (r[A] = 0.501, r[C] = 0.276).

Work is proportional to number of nonzero entries and their edges


Page 9: Parallel Local Graph Clustering - MIT CSAIL

Parallel PR-Nibble
• Input: seed vertex s, error ε, teleportation α
• Maintain approximate PageRank vector p and residual vector r (length equal to # vertices)

• Initialize p = {0.0,…,0.0}, r = {0.0,…,0.0} and r[s] = 1.0

• While(any vertex u satisfies r[u]/deg(u) ≥ ε)
  1. Choose ALL vertices u where r[u]/deg(u) ≥ ε
  2. p[u] = p[u] + αr[u]
  3. For all neighbors ngh of u: r[ngh] = r[ngh] + (1-α)r[u]/(2deg(u))
  4. r[u] = (1-α)r[u]/2

• Apply sweep cut rounding on p to obtain cluster

Using some stale information—is that a problem?


Page 10: Parallel Local Graph Clustering - MIT CSAIL

Parallel PR-Nibble
• We prove that asymptotic work remains the same as the sequential version, O(1/(αε))
• Guarantee on cluster quality is also maintained
• Parallel implementation:
  • Use fetch-and-add to deal with conflicts
  • Concurrent hash table to represent sparse sets (for local running time)
  • Use the Ligra graph processing framework [Shun and Blelloch 2013] to process only the “active” vertices and their edges (for local running time)

[Figure: fetch-and-add illustration: Proc1, Proc2, and Proc3 concurrently add +1, +3, and +8 to the same memory location.]


Page 11: Parallel Local Graph Clustering - MIT CSAIL

Ligra Graph Processing Framework

EdgeMap, VertexMap, VertexSubset

[Figure: VertexMap applied to a VertexSubset {0, 4, 6, 8} of an example graph with vertices 0-8, using f(v) { data[v] = data[v] + 1; }.]

Page 12: Parallel Local Graph Clustering - MIT CSAIL

Ligra Graph Processing Framework

EdgeMap, VertexMap, VertexSubset

[Figure: EdgeMap applied to the same VertexSubset {0, 4, 6, 8}, running update(u,v){…} over the out-edges of the subset and producing a new VertexSubset.]
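The two primitives can be mimicked in a few lines. A simplified, single-threaded Python stand-in for the C++ framework (vertex_map, edge_map, and the boolean-returning update convention are our own simplifications):

def vertex_map(subset, f):
    # Apply f to every vertex in the VertexSubset.
    for v in subset:
        f(v)

def edge_map(graph, subset, update):
    # Apply update(u, v) to each out-edge of the subset; collect targets
    # for which update returns True into the next VertexSubset.
    out = set()
    for u in subset:
        for v in graph[u]:
            if update(u, v):
                out.add(v)
    return out

For example, calling vertex_map with a function that increments data[v] reproduces the VertexMap example from the previous slide.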

Page 13: Parallel Local Graph Clustering - MIT CSAIL

Parallel PR-Nibble in Ligra

sparseSet p = {}, sparseSet r = {}, sparseSet r’ = {}; //concurrent hash tables

procedure UpdateNgh(s, d):

atomicAdd(r’[d], (1-α)r[s]/(2*deg(s)));

procedure UpdateSelf(u):

p[u] = p[u] + αr[u]; r’[u] = (1-α)r[u]/2;

procedure PR-Nibble(G, seed, α, ε):

r = {(seed, 1.0)};

while (true):

active = { u | r[u]/deg(u) ≥ ε }; //vertexSubset
if active is empty, then break;
VertexMap(active, UpdateSelf);
EdgeMap(G, active, UpdateNgh);
r = r’; //swap roles for next iteration
return p;


While(any vertex u satisfies r[u]/deg(u) ≥ ε)
1) For all vertices u where r[u]/deg(u) ≥ ε:
   a) p[u] = p[u] + αr[u]
   b) For all neighbors ngh of u: r[ngh] = r[ngh] + (1-α)r[u]/(2deg(u))
   c) r[u] = (1-α)r[u]/2

Work is only done on “active” vertices and their outgoing edges
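A single-threaded Python rendering of this frontier-style loop may help make the structure concrete (a sketch only: r_next stands in for r’, and the initial copy of r carries over the residual of vertices not updated in the iteration, a detail the slide's pseudocode elides; the real code uses Ligra with atomic adds):

from collections import defaultdict

def pr_nibble_frontier(graph, seed, alpha=0.1, eps=0.1):
    p = defaultdict(float)
    r = defaultdict(float)
    r[seed] = 1.0
    deg = lambda v: len(graph[v])
    while True:
        active = [u for u in r if r[u] / deg(u) >= eps]    # the vertexSubset
        if not active:
            break
        r_next = defaultdict(float, r)                     # assumed carry-over of inactive residual
        for u in active:                                   # VertexMap(active, UpdateSelf)
            p[u] += alpha * r[u]
            r_next[u] = (1 - alpha) * r[u] / 2
        for u in active:                                   # EdgeMap(G, active, UpdateNgh)
            for ngh in graph[u]:
                r_next[ngh] += (1 - alpha) * r[u] / (2 * deg(u))
        r = r_next
    return p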

Page 14: Parallel Local Graph Clustering - MIT CSAIL

Performance of Parallel PR-Nibble


Page 15: Parallel Local Graph Clustering - MIT CSAIL

Parallel PR-Nibble in Practice

[Figure: two bar charts over soc-LJ, Patents, com-LJ, Orkut, Twitter, Friendster, and Yahoo comparing sequential and parallel PR-Nibble: number of vertices processed (normalized) and number of iterations (normalized, log scale).]

• Amount of work slightly higher than sequential
• Number of iterations until termination is much lower!


Page 16: Parallel Local Graph Clustering - MIT CSAIL

PR-Nibble Optimization

Original update rule:
• While(any vertex u satisfies r[u]/deg(u) ≥ ε)
  1. Choose ALL vertices u where r[u]/deg(u) ≥ ε
  2. p[u] = p[u] + αr[u]
  3. For all neighbors ngh of u: r[ngh] = r[ngh] + (1-α)r[u]/(2deg(u))
  4. r[u] = (1-α)r[u]/2

Optimized update rule:
• While(any vertex u satisfies r[u]/deg(u) ≥ ε)
  1. Choose ALL vertices u where r[u]/deg(u) ≥ ε
  2. p[u] = p[u] + (2α/(1+α))r[u]
  3. For all neighbors ngh of u: r[ngh] = r[ngh] + ((1-α)/(1+α))r[u]/deg(u)
  4. r[u] = 0

• Gives the same conductance and asymptotic work guarantees as the original algorithm
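One iteration under the optimized rule can be sketched as a drop-in replacement for the two phases of the earlier frontier-style sketch (illustrative, not the paper's code):

def optimized_iteration(graph, active, p, r, alpha):
    # One iteration of the optimized rule: drain all residual at each active
    # vertex, then push to neighbors (old r values are read throughout).
    deg = lambda v: len(graph[v])
    r_next = dict(r)
    for u in active:
        p[u] = p.get(u, 0.0) + (2 * alpha / (1 + alpha)) * r[u]
        r_next[u] = 0.0
    for u in active:
        for ngh in graph[u]:
            r_next[ngh] = r_next.get(ngh, 0.0) + ((1 - alpha) / (1 + alpha)) * r[u] / deg(u)
    return r_next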

[Figure: normalized running time of original vs. optimized PR-Nibble on soc-LJ, cit-Patents, com-LJ, com-Orkut, nlpkkt240, Twitter, com-friendster, Yahoo, randLocal, and 3D-grid.]

Page 17: Parallel Local Graph Clustering - MIT CSAIL

Parallel PR-Nibble Performance

[Figure: self-relative speedup of parallel PR-Nibble on 40 cores for soc-LJ, Patents, com-LJ, Orkut, Twitter, Friendster, and Yahoo.]

• 10-22x self-relative speedup on 40 cores
• Speedup limited by small active set in some iterations and memory effects

• Running times are in seconds to sub-seconds

[Figure: speedup of parallel PR-Nibble on 40 cores relative to our optimized sequential code, for the same graphs.]

Page 18: Parallel Local Graph Clustering - MIT CSAIL

Sweep Cut Rounding


Page 19: Parallel Local Graph Clustering - MIT CSAIL

Sweep Cut Rounding Procedure
• What to do with the p vector?
• Sweep cut rounding procedure:

• Sort vertices by non-increasing value of p[v]/deg(v) (for non-zero entries in p)

• Look at all possible prefixes of sorted order and choose the cut with lowest conductance

[Figure: example graph with vertices A-H.]

Example

Sorted order = {A, B, C, D}

Conductance = num. edges leaving cluster / sum of degrees in cluster

Cluster       Conductance
{A}           2/2 = 1
{A,B}         2/4 = 0.5
{A,B,C}       1/7 = 0.14
{A,B,C,D}     3/11 = 0.27


Page 20: Parallel Local Graph Clustering - MIT CSAIL

Sweep Cut Algorithm

• Sort vertices by non-increasing value of p[v]/deg(v) (for non-zero entries in p)
  • O(N log N) work, where N is number of non-zeros in p
• Look at all possible prefixes of sorted order and choose the cut with lowest conductance
  • Naively takes O(N vol(N)) work, where vol(N) = sum of degrees of non-zero vertices in p
  • O(vol(N)) work algorithm:
    vol = 0
    crossingEdges = 0
    S = {}
    for each vertex v in sorted order:
      S = S U {v}
      vol += deg(v)
      for each ngh of v:
        if ngh in S: crossingEdges--
        else: crossingEdges++
      conductance(S) = crossingEdges/vol

Example on the graph with vertices A, B, C, D:
S = {A}: vol = 2, crossingEdges = 2, conductance({A}) = 2/2
S = {A,B}: vol += 2 → vol = 4; crossingEdges--; crossingEdges++; conductance({A,B}) = 2/4

N << # vertices in graph
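The O(vol(N)) scan translates almost line for line into Python. A minimal sketch (p is a dict of the nonzero PageRank values; note that a full implementation would use the smaller side's volume per the earlier footnote, while this sketch divides by vol to mirror the pseudocode):

def sweep_cut(graph, p):
    deg = lambda v: len(graph[v])
    order = sorted(p, key=lambda v: p[v] / deg(v), reverse=True)   # sort step
    S, vol, crossing = set(), 0, 0
    best_prefix, best_cond = None, float("inf")
    for i, v in enumerate(order):
        S.add(v)
        vol += deg(v)
        for ngh in graph[v]:
            if ngh in S:
                crossing -= 1   # edge became internal to S
            else:
                crossing += 1   # edge now leaves S
        cond = crossing / vol
        if cond < best_cond:
            best_cond, best_prefix = cond, list(order[:i + 1])
    return best_prefix, best_cond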

Page 21: Parallel Local Graph Clustering - MIT CSAIL

Parallel Sweep Cut

• Sort vertices by non-increasing value of p[v]/deg(v) (for non-zero entries in p)
  • O(N log N) work and O(log N) depth (parallel time), where N is number of non-zeros in p
• Look at all possible prefixes of sorted order and choose the cut with lowest conductance
  • Naively takes O(N vol(N)) work, where vol(N) = sum of degrees of non-zero vertices in p
  • This version is easily parallelizable by considering all cuts independently
  • What about parallelizing the O(vol(N)) work algorithm?

N << # vertices in graph

Page 22: Parallel Local Graph Clustering - MIT CSAIL

O(vol(N)) work parallel algorithm

[Figure: example graph with vertices A-H.]

Conductance = num. edges leaving cluster / sum of degrees in cluster

Cluster       Conductance
{A}           2/2 = 1
{A,B}         2/4 = 0.5
{A,B,C}       1/7 = 0.14
{A,B,C,D}     3/11 = 0.27

Sorted order = {A, B, C, D}

• Each vertex has a rank in the sorted order:
  • rank(A) = 1, rank(B) = 2, rank(C) = 3, rank(D) = 4, rank(anything else) = 5
• For each incident edge (x, y) of a vertex in the sorted order, where rank(x) < rank(y), create pairs (1, rank(x)) and (-1, rank(y)):
  [(1,1), (-1,2), (1,1), (-1,3), (1,2), (-1,3), (1,3), (-1,4), (1,4), (-1,5), (1,4), (-1,5), (1,4), (-1,5)]
  (edges A-B, A-C, B-C, C-D, D-E, D-F, D-G)
• Sort pairs by second value:
  [(1,1), (1,1), (-1,2), (1,2), (-1,3), (-1,3), (1,3), (-1,4), (1,4), (1,4), (1,4), (-1,5), (-1,5), (-1,5)]
• Prefix sum on first value:
  [(1,1), (2,1), (1,2), (2,2), (1,3), (0,3), (1,3), (0,4), (1,4), (2,4), (3,4), (2,5), (1,5), (0,5)]

• Get denominator of conductance with prefix sum over degrees

Hash table insertions: O(N) work and O(log N) depth

Scan over edges: O(vol(N)) work and O(log vol(N)) depth

Prefix sum: O(vol(N)) work and O(log vol(N)) depth

Integer sort: O(vol(N)) work and O(log vol(N)) depth

If x and y are both in the prefix, the 1 and -1 cancel out in the prefix sum.
If x is in the prefix and y is not, only the 1 contributes to the prefix sum.
If neither x nor y is in the prefix, there is no contribution.
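A sequential Python mock-up of the pair / sort / prefix-sum idea (the real algorithm performs the integer sort and prefix sum in parallel; the function and variable names are illustrative):

from itertools import accumulate

def crossing_edges_per_prefix(graph, order):
    # order: the non-zero vertices sorted by p[v]/deg(v); all other vertices
    # implicitly get rank len(order)+1.
    rank = {v: i + 1 for i, v in enumerate(order)}
    outside = len(order) + 1
    pairs = []
    for x in order:
        for y in graph[x]:
            rx, ry = rank[x], rank.get(y, outside)
            if rx < ry:
                pairs += [(1, rx), (-1, ry)]       # (1, rank(x)) and (-1, rank(y))
    pairs.sort(key=lambda pr: pr[1])               # integer sort by second value
    sums = list(accumulate(f for f, _ in pairs))   # prefix sum over first values
    # Crossing edges for the prefix of the first k vertices = last prefix-sum
    # value among pairs whose second value is <= k.
    crossing, j, cur = [], 0, 0
    for k in range(1, len(order) + 1):
        while j < len(pairs) and pairs[j][1] <= k:
            cur = sums[j]
            j += 1
        crossing.append(cur)
    return crossing

On the example above (order = [A, B, C, D]) this returns [2, 2, 1, 3], matching the numerators in the conductance table.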


Page 23: Parallel Local Graph Clustering - MIT CSAIL

Sweep Cut Performance

[Figure: two bar charts over soc-LJ, Patents, com-LJ, Orkut, Twitter, Friendster, and Yahoo: self-relative speedup of the sweep cut on 40 cores, and speedup on 40 cores relative to sequential.]

• 23-28x speedup on 40 cores
• About a 2x overhead from sequential to parallel
• Outperforms sequential with 4 or more cores


[Figure: running time (seconds, log scale) vs. number of cores (1 to 40) for parallel sweep and sequential sweep.]

Page 24: Parallel Local Graph Clustering - MIT CSAIL

Network Community Profile Plots

[Figure: network community profile plots (conductance vs. cluster size, log-log scale) for two graphs: one with 42M vertices and 1.2B edges, and one with 125M vertices and 1.8B edges.]

• Use parallel algorithms to generate plots for large graphs

• Agrees with conclusions of [Leskovec et al. 2008, 2010, Jeub et al. 2015] that good clusters tend to be relatively small


Page 25: Parallel Local Graph Clustering - MIT CSAIL

Summary of our Parallel Algorithms
• Sweep cut
• PageRank-Nibble [Andersen, Chung, and Lang 2006]
• Nibble [Spielman and Teng 2004]
• Deterministic HeatKernel-PageRank [Kloster and Gleich 2014]
• Randomized HeatKernel-PageRank [Chung and Simpson 2015]

• Based on iteratively processing sets of “active” vertices in parallel

• Use concurrent hash tables and Ligra’s functionality to get local running times

• We prove theoretically that parallel work asymptotically matches sequential work, and obtain low depth (parallel time) complexity
