Spectrally Thin Trees
Transcript of Spectrally Thin Trees
![Page 1: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/1.jpg)
Spectrally Thin Trees
Nick Harvey University of British Columbia
Joint work with Neil Olver (MIT / Vrije Universiteit)
![Page 2: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/2.jpg)
Approximating Dense Objects by Sparse Objects
Floor joists
Wood Joists Engineered Joists
![Page 3: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/3.jpg)
Approximating Dense Objects by Sparse Objects
Bridges
Masonry Arch Truss Arch
![Page 4: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/4.jpg)
Approximating Dense Objects by Sparse Objects
Bones
Human Femur Robin Bone
![Page 5: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/5.jpg)
Approximating Dense Objects by Sparse Objects
Graphs
Dense Graph Sparse Graph
How well can any graph be approximated by a sparse graph?
![Page 6: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/6.jpg)
First way to compare graphs
Do the graphs have nearly the same weight on corresponding cuts?
![Page 7: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/7.jpg)
Second way to compare graphs
Do their Laplacian matrices have nearly the same eigensystem?
(The Laplacian matrices of the two graphs are shown side by side.)
![Page 8: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/8.jpg)
First way, more formally
Weight of cut: u(δ(S)), w(δ(S))
Edge weights u; edge weights w
α-cut sparsifier: u(δ(S)) ≤ w(δ(S)) ≤ α·u(δ(S)) ∀S
Cut δ(S) = { edge st : s∈S, t∉S }
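On a small graph the α-cut-sparsifier condition can be checked directly by enumerating all cuts. A minimal sketch (not from the talk; the example graph and helper names are illustrative):

```python
# Brute-force check of  u(delta(S)) <= w(delta(S)) <= alpha * u(delta(S))
# over every nonempty proper vertex subset S.
from itertools import combinations

def cut_weight(edges, S):
    """Total weight of edges with exactly one endpoint in S.
    edges: dict {(s, t): weight} with s < t."""
    return sum(wt for (s, t), wt in edges.items() if (s in S) != (t in S))

def is_alpha_cut_sparsifier(n, u_edges, w_edges, alpha):
    for size in range(1, n):
        for S in combinations(range(n), size):
            S = set(S)
            cu, cw = cut_weight(u_edges, S), cut_weight(w_edges, S)
            if not (cu <= cw <= alpha * cu + 1e-9):
                return False
    return True

# Example: a 4-cycle, and the same cycle with every weight doubled.
u = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 3): 1}
w = {e: 2 * wt for e, wt in u.items()}
```

Here every cut of w is exactly twice the corresponding cut of u, so w is a 2-cut sparsifier of u but not a 1.5-cut sparsifier.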
![Page 9: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/9.jpg)
Second way, more formally
Graph with weights u: vertices a, b, c, d, with edge weights u(ab)=2, u(ac)=5, u(bc)=1, u(cd)=10.

Laplacian matrix:

L_u = D − A =
        a    b    c    d
  a [   7   -2   -5    0 ]
  b [  -2    3   -1    0 ]
  c [  -5   -1   16  -10 ]
  d [   0    0  -10   10 ]

Each diagonal entry is the weighted degree of the node; each off-diagonal entry, e.g. (a,c), is the negative of u(ac).
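The construction L = D − A is easy to reproduce. A sketch (vertex labels a, b, c, d are mapped to indices 0–3; the edge weights are the slide's example):

```python
import numpy as np

def laplacian(n, edges):
    """L = D - A for an undirected weighted graph.
    edges: dict {(i, j): weight}."""
    L = np.zeros((n, n))
    for (i, j), wt in edges.items():
        L[i, i] += wt   # weighted degree on the diagonal
        L[j, j] += wt
        L[i, j] -= wt   # minus the edge weight off-diagonal
        L[j, i] -= wt
    return L

# The slide's example graph: u(ab)=2, u(ac)=5, u(bc)=1, u(cd)=10.
L = laplacian(4, {(0, 1): 2, (0, 2): 5, (1, 2): 1, (2, 3): 10})
```

Every row of a Laplacian sums to zero, and the diagonal here is (7, 3, 16, 10), matching the matrix above.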
![Page 10: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/10.jpg)
Second way, more formally
Def: A ⪯ B ⇔ B − A is PSD ⇔ xᵀAx ≤ xᵀBx ∀x∈Rⁿ
α-spectral sparsifier: L_u ⪯ L_w ⪯ α·L_u
(The Laplacians L_u and L_w of the two graphs, with edge weights u and w, are shown side by side.)
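The Loewner-order condition L_u ⪯ L_w ⪯ α·L_u can be tested numerically via eigenvalues. A minimal sketch (the triangle example is illustrative, not from the talk):

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """A symmetric matrix is PSD iff its smallest eigenvalue is >= 0."""
    return np.linalg.eigvalsh(M).min() >= -tol

def is_alpha_spectral_sparsifier(Lu, Lw, alpha):
    """Check L_u <= L_w <= alpha * L_u in the PSD (Loewner) order."""
    return is_psd(Lw - Lu) and is_psd(alpha * Lu - Lw)

# Example: unit-weight triangle, and the same graph scaled by 1.5.
Lu = np.array([[2., -1, -1], [-1, 2, -1], [-1, -1, 2]])
Lw = 1.5 * Lu
```

Since L_w − L_u = 0.5·L_u and 2·L_u − L_w = 0.5·L_u are both PSD, L_w is a 2-spectral sparsifier of L_u (but not a 1.2-spectral one).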
![Page 11: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/11.jpg)
Thin trees
Let w be supported on a spanning tree.
α-thin tree: w(δ(S)) ≤ α·u(δ(S)) ∀S
α-spectrally thin tree: L_w ⪯ α·L_u
Edge weights u; edge weights w
![Page 12: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/12.jpg)
Connectivity and Conductance
Connectivity: k_st = min { u(δ(S)) : s∈S, t∉S }
Global connectivity: K = min { k_e : e∈E }
Effective resistance from s to t: the voltage difference when a 1-amp current source is placed between s and t
Effective conductance: c_st = 1 / (effective resistance from s to t)
Global conductance: C = min { c_e : e∈E }
Fact: c_st ≤ k_st ∀s,t. Example: on a long path, c_st = 1/n but k_st = 1. Long paths affect conductance but not connectivity.
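Effective resistance has a clean linear-algebraic formula via the Laplacian pseudoinverse, which also illustrates the path example above. A sketch:

```python
import numpy as np

def effective_resistance(L, s, t):
    """R_eff(s, t) = (e_s - e_t)^T L^+ (e_s - e_t)."""
    Lp = np.linalg.pinv(L)
    x = np.zeros(L.shape[0])
    x[s], x[t] = 1.0, -1.0
    return float(x @ Lp @ x)

# Path on n vertices (n-1 unit edges): R_eff between the endpoints is n-1,
# so the conductance c_st = 1/(n-1) even though the connectivity k_st is 1.
n = 6
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1
R = effective_resistance(L, 0, n - 1)
```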
![Page 13: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/13.jpg)
Motivation for thin trees
Goddyn's Conjecture: every graph has an O(1/K)-thin tree.
- O(1)-approximation for asymmetric TSP
- Jaeger's conjecture on nowhere-zero 3-flows [solved]
- Goddyn-Seymour conjecture on nowhere-zero 2+ε flows
Spectrally thin trees may be a useful step towards thin trees
Edge weights u
Unweighted
![Page 14: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/14.jpg)
Intriguing Phenomenon
cut-sparsifier result involving connectivities holds
seemingly if and only if
spectral-sparsifier result involving conductances holds
![Page 15: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/15.jpg)
Uniform sampling
Assume the graph is unweighted. Recall K = min { k_e : e∈E } and C = min { c_e : e∈E }.
Karger's skeletons:
- Define p = O( ε⁻² log(n) / K )
- Sample every edge e with probability p
- Give every sampled edge e weight 1/p
The resulting graph is a (1+ε)-cut sparsifier, and the number of edges shrinks by a factor of O(p), whp.
Spectral version [unpublished]: replace K by C and "cut" by "spectral".
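The uniform-sampling scheme is only a few lines. A hedged sketch (the constant `c` stands in for the unspecified O(·) constant; the K_20 example is illustrative):

```python
import numpy as np

def karger_skeleton(edges, K, n, eps, rng, c=1.0):
    """Uniform sampling sketch: p = O(eps^-2 log n / K); keep each edge
    with probability p and reweight survivors by 1/p."""
    p = min(1.0, c * np.log(n) / (eps**2 * K))
    kept = {e: wt / p for e, wt in edges.items() if rng.random() < p}
    return kept, p

# Example: the complete graph K_20 (min cut, hence K, is 19).
rng = np.random.default_rng(0)
edges = {(i, j): 1.0 for i in range(20) for j in range(i + 1, 20)}
kept, p = karger_skeleton(edges, K=19, n=20, eps=0.5, rng=rng)
```

Every surviving edge carries weight 1/p, so each cut's expected weight is preserved exactly; the theorem says all cuts are preserved up to (1±ε) whp.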
![Page 16: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/16.jpg)
| | Cut sparsifier, connectivity weights | Spectral sparsifier, conductance weights |
|---|---|---|
| Uniform sampling | Karger | Unpublished |
![Page 17: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/17.jpg)
Non-uniform sampling
Let k_e be the "strong connectivity" of edge e.
Benczur-Karger:
- Define p_e = O( ε⁻² log(n) / k_e )
- Sample every edge e with probability p_e
- Give every sampled edge e weight 1/p_e
The resulting graph is a (1+ε)-cut sparsifier, and the number of sampled edges is O(n log(n) ε⁻²), whp.
Fung-Hariharan-Harvey-Panigrahi: replace strong connectivity by standard edge connectivity, and log(n) by log²(n).
Open question: improve log²(n) to log(n).
![Page 18: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/18.jpg)
Non-uniform sampling
Let k_e be the "strong connectivity" of edge e.
Benczur-Karger:
- Define p_e = O( ε⁻² log(n) / k_e )
- Sample every edge e with probability p_e
- Give every sampled edge e weight 1/p_e
The resulting graph is a (1+ε)-cut sparsifier, and the number of sampled edges is O(n log(n) ε⁻²), whp.
Spielman-Srivastava: replace k_e by c_e and "cut" by "spectral".
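For an unweighted edge, 1/c_e is just the effective resistance of the edge, so conductance-based sampling can be sketched directly. A hedged sketch (the constant `c` is a placeholder for the O(·) constant; the K_8 example is illustrative):

```python
import numpy as np

def resistance_sampled_sparsifier(n, edges, eps, rng, c=0.5):
    """Sketch of conductance-based sampling for an unweighted graph:
    p_e = min(1, c * eps^-2 * log n * R_eff(e)), since R_eff(e) = 1/c_e.
    Surviving edges are reweighted by 1/p_e."""
    L = np.zeros((n, n))
    for (i, j) in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    Lp = np.linalg.pinv(L)
    out = {}
    for (i, j) in edges:
        x = np.zeros(n); x[i], x[j] = 1, -1
        reff = x @ Lp @ x                       # 1/c_e for a unit edge
        p = min(1.0, c * np.log(n) * reff / eps**2)
        if rng.random() < p:
            out[(i, j)] = 1.0 / p
    return out

# Example: the complete graph K_8.
rng = np.random.default_rng(0)
K8 = [(i, j) for i in range(8) for j in range(i + 1, 8)]
H = resistance_sampled_sparsifier(8, K8, eps=0.5, rng=rng)
```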
![Page 19: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/19.jpg)
| | Cut sparsifier, connectivity weights | Spectral sparsifier, conductance weights |
|---|---|---|
| Uniform sampling | Karger | Unpublished |
| Non-uniform sampling | Benczur-Karger; Fung-Hariharan-Harvey-Panigrahi | Spielman-Srivastava |
![Page 20: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/20.jpg)
Thin trees
Asadpour et al.: pick a special distribution on spanning trees such that every edge e has Pr[ e in tree ] = Θ(1/K), and give every edge e in the tree weight K.
The resulting tree is an O(log n / log log n)-cut-thin tree. The maximum-entropy distribution works.
Chekuri et al.: pipage rounding also works.
Harvey-Olver: replace K by c_e and "cut" by "spectral", giving a spectrally thin tree.
![Page 21: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/21.jpg)
| | Cut sparsifier, connectivity weights | Spectral sparsifier, conductance weights |
|---|---|---|
| Uniform sampling | Karger | Unpublished |
| Non-uniform sampling | Benczur-Karger; Fung-Hariharan-Harvey-Panigrahi | Spielman-Srivastava |
| O(log n / log log n) thin trees | Asadpour et al.; Chekuri-Vondrak-Zenklusen | Harvey-Olver |
![Page 22: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/22.jpg)
Linear-size sparsifiers
Batson-Spielman-Srivastava: can efficiently construct a (1+ε)-spectral sparsifier with O(n ε⁻²) edges such that, "on average", the weight of each edge e is Θ(ε² c_e).
Marcus-Spielman-Srivastava: remove "on average", but not efficient.
Open question: replace c_e by k_e and "spectral" by "cut"?
![Page 23: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/23.jpg)
| | Cut sparsifier, connectivity weights | Spectral sparsifier, conductance weights |
|---|---|---|
| Uniform sampling | Karger | Unpublished |
| Non-uniform sampling | Benczur-Karger; Fung-Hariharan-Harvey-Panigrahi | Spielman-Srivastava |
| O(log n / log log n) thin trees | Asadpour et al.; Chekuri-Vondrak-Zenklusen | Harvey-Olver |
| Linear-size sparsifiers | ? | Batson-Spielman-Srivastava; Marcus-Spielman-Srivastava |
![Page 24: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/24.jpg)
Optimal thin trees
Suppose we have a (1+ε)-spectral sparsifier (of weights u) such that the weight of every edge is w_e = Θ(ε² c_e).
Then any spanning tree T (with weights w) is (1+ε)-spectrally thin; equivalently, the unweighted tree T is O(1/C)-spectrally thin.
The same argument works if we replace c_e by k_e and "spectrally thin" by "cut thin", giving an O(1/K)-cut-thin tree.
![Page 25: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/25.jpg)
| | Cut sparsifier, connectivity weights | Spectral sparsifier, conductance weights |
|---|---|---|
| Uniform sampling | Karger | Unpublished |
| Non-uniform sampling | Benczur-Karger; Fung-Hariharan-Harvey-Panigrahi | Spielman-Srivastava |
| O(log n / log log n) thin trees | Asadpour et al.; Chekuri-Vondrak-Zenklusen | Harvey-Olver |
| Linear-size sparsifiers | ? | Batson-Spielman-Srivastava; Marcus-Spielman-Srivastava |
| O(1) thin trees | ? | Corollary of MSS |
![Page 26: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/26.jpg)
Spectrally Thin Trees
Given a graph G with effective conductances ≥ C, find an unweighted spanning subtree T with L_T ⪯ (α/C)·L_G.
Easy lower bound: α ≥ 1.5.
Easy upper bound: α = O(log n), algorithmic (even deterministic).
Main Theorem: α = O(log n / log log n), algorithmic (even deterministic).
Theorem [MSS]: α = O(1), existential result only.
![Page 27: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/27.jpg)
Spectrally Thin Trees
Given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ⪯ O(log n / log log n)·(1/C)·L_G.
Proof overview:
1. Show independent sampling gives spectral thinness, but not a tree.
   ► Sample every edge e independently with probability x_e = 1/c_e.
2. Show dependent sampling gives a tree, and spectral thinness still works.
![Page 28: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/28.jpg)
Matrix Concentration
Given random n×n symmetric matrices Y_1, …, Y_m, is there an analog of the Chernoff bound showing that Σ_i Y_i is probably "close" to E[Σ_i Y_i]?
Theorem [Tropp '12]: Let Y_1, …, Y_m be independent PSD matrices of size n×n. Let Y = Σ_i Y_i and Z = E[Y]. Suppose Y_i ⪯ R·Z a.s. Then Pr[ Y ⪯ t·Z fails ] ≤ n · ( e^(t−1) / t^t )^(1/R).
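The phenomenon behind such bounds is easy to see numerically: a sum of many independent, individually small PSD matrices concentrates around its expectation in spectral norm. A Monte Carlo sketch (the rank-one construction and all parameters are illustrative, not from the talk):

```python
import numpy as np

# Y_i = xi_i * v_i v_i^T with xi_i ~ Bernoulli(1/2): each summand is PSD
# and small relative to Z = E[sum_i Y_i], so the sum concentrates.
rng = np.random.default_rng(0)
n, m = 4, 1000
vs = rng.normal(size=(m, n))
Z = 0.5 * sum(np.outer(v, v) for v in vs)                  # exact expectation
Y = sum(np.outer(v, v) for v in vs if rng.random() < 0.5)  # one random sample
rel_err = np.linalg.norm(Y - Z, 2) / np.linalg.norm(Z, 2)  # spectral norms
```

With m = 1000 summands the relative spectral-norm deviation is small, in line with what the matrix Chernoff bound predicts.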
![Page 29: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/29.jpg)
Independent sampling
Define sampling probabilities x_e = 1/c_e. It is known that Σ_e x_e = n − 1 (a property of conductances: Foster's theorem).
Claim: Independent sampling gives T ⊆ E with E[|T|] = n − 1 and L_T ⪯ (α/C)·L_G whp.
Theorem [Tropp '12]: Let M_1, …, M_m be n×n PSD matrices. Let D(x) be a product distribution on {0,1}^m with marginals x. Let Z = E_{T∼D(x)}[ Σ_{i∈T} M_i ] and suppose M_i ⪯ Z. Then Σ_{i∈T} M_i ⪯ α·Z whp.
Define M_e = c_e·L_e, where L_e is the Laplacian of the single edge e. Then Z = L_G, and M_e ⪯ Z holds (a property of conductances).
Setting α = 6 log n / log log n, the claim holds whp. But T is not a tree!
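The identity Σ_e x_e = n − 1 used above (Foster's theorem) can be verified numerically: for a connected unweighted graph, the effective resistances 1/c_e summed over all edges equal n − 1. A sketch:

```python
import numpy as np
from itertools import combinations

def foster_sum(n, edges):
    """Sum over edges of the effective resistance 1/c_e; Foster's theorem
    says this equals n - 1 for any connected unweighted graph."""
    L = np.zeros((n, n))
    for (i, j) in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    Lp = np.linalg.pinv(L)
    total = 0.0
    for (i, j) in edges:
        x = np.zeros(n); x[i], x[j] = 1, -1
        total += x @ Lp @ x
    return total

# Complete graph on 5 vertices: each of the 10 edges has R_eff = 2/5.
total = foster_sum(5, list(combinations(range(5), 2)))
```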
![Page 30: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/30.jpg)
Spectrally Thin Trees
Given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ⪯ O(log n / log log n)·(1/C)·L_G.
Proof overview:
1. Show independent sampling gives spectral thinness, but not a tree.
   ► Sample every edge e independently with probability x_e = 1/c_e.
2. Show dependent sampling gives a tree, and spectral thinness still works.
   ► Run pipage rounding to get a tree T with Pr[ e∈T ] = x_e = 1/c_e.
![Page 31: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/31.jpg)
Pipage rounding
[Ageev-Sviridenko '04, Srinivasan '01, Calinescu et al. '07, Chekuri et al. '09]
Let P be any matroid polytope, e.g., the convex hull of characteristic vectors of spanning trees.
Given a fractional x:
- Find coordinates a and b s.t. the line z ↦ x + z(e_a − e_b) stays in the current face
- Find the two points where the line leaves P
- Randomly choose one of those points s.t. the expectation is x
- Repeat until x = χ_T is integral
x is a martingale: the expectation of the final χ_T is the original fractional x.
(The figure shows the polytope P with vertices χ_T1, …, χ_T6 and the interior point x.)
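The rounding step is easiest to see in the simplest matroid polytope, the uniform matroid {x ∈ [0,1]^m : Σx = k}, rather than the spanning-tree polytope from the talk. A hedged sketch of that special case (function name and example are illustrative):

```python
import numpy as np

def pipage_round_cardinality(x, rng, tol=1e-12):
    """Pipage rounding on {x in [0,1]^m : sum x = k}: repeatedly pick two
    fractional coordinates a, b and move along e_a - e_b until hitting the
    boundary of [0,1]^2, choosing the direction randomly so that each step
    preserves the expectation (x is a martingale)."""
    x = np.array(x, dtype=float)
    while True:
        frac = [i for i in range(len(x)) if tol < x[i] < 1 - tol]
        if len(frac) < 2:
            break
        a, b = frac[0], frac[1]
        up = min(1 - x[a], x[b])    # max move with x_a up, x_b down
        down = min(x[a], 1 - x[b])  # max move with x_a down, x_b up
        if rng.random() < down / (up + down):   # E[step] = 0
            x[a] += up; x[b] -= up
        else:
            x[a] -= down; x[b] += down
    return np.round(x).astype(int)

rng = np.random.default_rng(0)
xT = pipage_round_cardinality([0.5, 0.5, 0.5, 0.5], rng)  # k = 2
```

Each step makes at least one coordinate integral while preserving the coordinate sum, so the output is a 0/1 vector with exactly k ones.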
![Page 32: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/32.jpg)
Pipage rounding and concavity
Say f : R^m → R is concave under swaps if z ↦ f( x + z(e_a − e_b) ) is concave ∀x∈P, ∀a,b∈[m]. (E.g., f is the multilinear extension of a supermodular function.)
Let X_0 be the initial point and χ_T the final point visited by pipage rounding.
Claim: If f is concave under swaps then E[f(χ_T)] ≤ f(X_0). [Jensen]
Let E ⊆ {0,1}^m be an event, and let g : [0,1]^m → R be a pessimistic estimator for E, i.e., Pr_{T∼D(x)}[ χ_T ∈ E ] ≤ g(x).
Claim: Suppose g is concave under swaps. Then Pr[ χ_T ∈ E ] ≤ g(X_0).
![Page 33: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/33.jpg)
Chernoff Bound
Chernoff Bound: Fix any w, x ∈ [0,1]^m and let μ = wᵀx. Define g_{t,θ}(x) = e^(−tθ) · Π_i ( 1 + (e^(t·w_i) − 1)·x_i ). Then Pr_{T∼D(x)}[ wᵀχ_T ≥ θ ] ≤ g_{t,θ}(x).
Claim: g_{t,θ} is concave under swaps. [Elementary calculus]
Let X_0 be the initial point and χ_T the final point visited by pipage rounding. Let μ = wᵀX_0. Then Pr[ wᵀχ_T ≥ θ ] ≤ g_{t,θ}(X_0): the bound achieved by independent sampling is also achieved by pipage rounding.
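The standard Chernoff pessimistic estimator is a closed-form product, so it is easy to evaluate and to sanity-check against an exact tail. A sketch (the estimator's form is the textbook one, assumed here to match the slide's elided formula):

```python
import numpy as np
from math import comb

def g(t, theta, w, x):
    """Chernoff pessimistic estimator for Pr[w . X >= theta] under
    independent X_i ~ Bernoulli(x_i):
      g_{t,theta}(x) = e^{-t*theta} * prod_i (1 + (e^{t*w_i} - 1) * x_i)."""
    w, x = np.asarray(w, float), np.asarray(x, float)
    return float(np.exp(-t * theta) * np.prod(1 + (np.exp(t * w) - 1) * x))

# Sanity check: it upper-bounds the exact tail Pr[Bin(10, 1/2) >= 8].
tail = sum(comb(10, j) for j in range(8, 11)) / 2**10
val = g(1.0, 8, [1.0] * 10, [0.5] * 10)
```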
![Page 34: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/34.jpg)
Matrix Pessimistic Estimators
Main Theorem: the matrix pessimistic estimator g_{t,θ} is concave under swaps.
Theorem [Tropp '12]: Let M_1, …, M_m be n×n PSD matrices. Let D(x) be a product distribution on {0,1}^m with marginals x. Let Z = E_{T∼D(x)}[ Σ_{i∈T} M_i ] and suppose M_i ⪯ Z. Let g_{t,θ} be the associated pessimistic estimator (defined via tr exp). Then Pr[ Σ_{i∈T} M_i ⪯ θ·Z fails ] ≤ g_{t,θ}(x), and g_{t,θ}(x) is small.
The bound achieved by independent sampling is also achieved by pipage rounding.
![Page 35: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/35.jpg)
Spectrally Thin Trees
Given an (unweighted) graph G with effective conductances ≥ C, we can find an unweighted tree T with L_T ⪯ O(log n / log log n)·(1/C)·L_G.
Proof overview:
1. Show independent sampling gives spectral thinness, but not a tree.
   ► Sample every edge e independently with probability x_e = 1/c_e.
2. Show dependent sampling gives a tree, and spectral thinness still works.
   ► Run pipage rounding to get a tree T with Pr[ e∈T ] = x_e = 1/c_e.
![Page 36: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/36.jpg)
Matrix Analysis
Matrix concentration inequalities are usually proven via sophisticated inequalities in matrix analysis:
- Rudelson: non-commutative Khinchine inequality
- Ahlswede-Winter: Golden-Thompson inequality — if A, B are symmetric, then tr(e^(A+B)) ≤ tr(e^A e^B).
- Tropp: Lieb's concavity inequality [1973] — if A, B are symmetric and C is PD, then z ↦ tr exp( A + log(C + zB) ) is concave.
Key technical result: a new variant of Lieb's theorem — if A is symmetric, B_1, B_2 are PSD, and C_1, C_2 are PD, then z ↦ tr exp( A + log(C_1 + zB_1) + log(C_2 − zB_2) ) is concave.
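The Golden-Thompson inequality above is easy to check numerically; for symmetric matrices the exponential can be computed from an eigendecomposition. A sketch (the random 4×4 test matrices are illustrative):

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition:
    e^M = V diag(e^lambda) V^T."""
    lam, V = np.linalg.eigh(M)
    return (V * np.exp(lam)) @ V.T

def rand_sym(n, rng):
    M = rng.normal(size=(n, n))
    return (M + M.T) / 2

# Golden-Thompson check: tr(e^{A+B}) <= tr(e^A e^B) for symmetric A, B.
rng = np.random.default_rng(1)
A, B = rand_sym(4, rng), rand_sym(4, rng)
lhs = np.trace(expm_sym(A + B))
rhs = np.trace(expm_sym(A) @ expm_sym(B))
```

Note the inequality holds for all symmetric (indeed Hermitian) A and B, with equality iff A and B commute.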
![Page 37: Spectrally Thin Trees](https://reader035.fdocuments.in/reader035/viewer/2022062222/568166cb550346895ddad780/html5/thumbnails/37.jpg)
Questions
- O(1/C)-spectrally thin trees exist. Is there an algorithm?
- Does sampling by edge connectivities give a cut sparsifier with O(n log n) edges?
- Do O(1/K)-cut-thin trees exist? What if we consider only the min cuts?
- Do cut sparsifiers with O(n ε⁻²) edges exist in which every edge e has weight Θ(ε² k_e)?