CS672: Approximation AlgorithmsSpring 2020
Intro to Semidefinite Programming
Instructor: Shaddin Dughmi
Outline
1 Basics of PSD Matrices
2 Semidefinite Programming
3 Max Cut
Symmetric Matrices

A matrix A ∈ R^{n×n} is symmetric if and only if it is square and A_{ij} = A_{ji} for all i, j.
We denote the cone of n×n symmetric matrices by S^n.

Fact
A matrix A ∈ R^{n×n} is symmetric if and only if it is orthogonally diagonalizable,
i.e. A = QDQᵀ where Q is an orthogonal matrix and D = diag(λ_1, …, λ_n).

The columns of Q are the (normalized) eigenvectors of A, with corresponding eigenvalues λ_1, …, λ_n.
Equivalently: as a linear operator, A scales the space along the orthonormal basis Q.
The scaling factor λ_i along direction q_i may be negative, positive, or 0.

Basics of PSD Matrices 1/13
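The orthogonal diagonalization in the Fact is easy to check numerically; a minimal sketch with numpy, on a hypothetical random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                       # symmetrize: A_ij = A_ji

# eigh returns eigenvalues (ascending) and an orthogonal matrix of eigenvectors
lam, Q = np.linalg.eigh(A)
D = np.diag(lam)

assert np.allclose(Q @ D @ Q.T, A)      # A = Q D Qᵀ
assert np.allclose(Q.T @ Q, np.eye(4))  # Q is orthogonal: QᵀQ = I
```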
Positive Semi-Definite Matrices

A matrix A ∈ R^{n×n} is positive semi-definite if it is symmetric and all its eigenvalues are nonnegative.

We denote the cone of n×n positive semi-definite matrices by S^n_+.
We write A ⪰ 0 as shorthand for A ∈ S^n_+.
A = QDQᵀ where Q is an orthogonal matrix and D = diag(λ_1, …, λ_n), with λ_i ≥ 0.
As a linear operator, A performs nonnegative scaling along the orthonormal basis Q.

Note
Positive definite, negative semi-definite, and negative definite matrices are defined similarly.

Basics of PSD Matrices 2/13
Geometric Intuition for PSD Matrices

For A ⪰ 0, let q_1, …, q_n be the orthonormal eigenbasis for A, and let λ_1, …, λ_n ≥ 0 be the corresponding eigenvalues.
The linear operator x ↦ Ax scales the q_i component of x by λ_i.
Applied to every x in the unit ball, the image of A is an ellipsoid centered at the origin with principal directions q_1, …, q_n and corresponding diameters 2λ_1, …, 2λ_n.
When A is positive definite (i.e. all λ_i > 0), and therefore invertible, the ellipsoid is the set
    {y : yᵀ(AAᵀ)⁻¹ y ≤ 1}.

Basics of PSD Matrices 3/13
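The ellipsoid description can be sanity-checked: for symmetric invertible A, a unit vector x maps to y = Ax with yᵀ(AAᵀ)⁻¹y = xᵀA(A²)⁻¹Ax = xᵀx = 1, so the unit sphere lands exactly on the ellipsoid's boundary. A sketch with a hypothetical positive definite A:

```python
import numpy as np

rng = np.random.default_rng(1)
# Build a positive definite A = Q diag(λ) Qᵀ with λ_i > 0
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
lam = np.array([0.5, 1.0, 2.0])
A = Q @ np.diag(lam) @ Q.T

Sinv = np.linalg.inv(A @ A.T)           # (AAᵀ)⁻¹

# Points on the unit sphere map to the boundary of the ellipsoid
for _ in range(100):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)              # ‖x‖ = 1
    y = A @ x
    assert np.isclose(y @ Sinv @ y, 1.0)
```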
Useful Properties of PSD Matrices

If A ⪰ 0, then:
- xᵀAx ≥ 0 for all x.
- A has a positive semi-definite square root A^{1/2} = Q diag(√λ_1, …, √λ_n) Qᵀ.
- A = BᵀB for some matrix B.
  Interpretation: PSD matrices encode the "pairwise similarity" relationships of a family of vectors; A_{ij} is the dot product of the i-th and j-th columns of B.
  Interpretation: the quadratic form xᵀAx is the squared length of a linear transformation of x, namely ‖Bx‖₂².
- The quadratic function xᵀAx is convex.
- A can be expressed as a sum of vector outer products, e.g. A = Σ_{i=1}^n v_i v_iᵀ for v_i = √λ_i · q_i.

As it turns out, each of the above is also sufficient for A ⪰ 0 (assuming A is symmetric).

Basics of PSD Matrices 4/13
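These properties can all be exercised in a few lines of numpy; the matrix below is a hypothetical example, built PSD by construction as BᵀB:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B.T @ B                             # PSD by construction

lam, Q = np.linalg.eigh(A)
assert np.all(lam >= -1e-9)             # eigenvalues nonnegative

# PSD square root: A^{1/2} = Q diag(√λ) Qᵀ, so A^{1/2} A^{1/2} = A
half = Q @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ Q.T
assert np.allclose(half @ half, A)

# Gram interpretation: A_ij is the dot product of columns i and j of B
assert np.isclose(A[0, 1], B[:, 0] @ B[:, 1])

# The quadratic form is a squared length: xᵀAx = ‖Bx‖₂²
x = rng.standard_normal(4)
assert np.isclose(x @ A @ x, np.linalg.norm(B @ x) ** 2)
```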
Properties of PSD Matrices Relevant for Computation

The set of PSD matrices is convex.
- Follows from the characterization: xᵀAx ≥ 0 for all x.

The set of PSD matrices admits an efficient separation oracle.
- Given A that is not PSD, find an eigenvector v with negative eigenvalue: vᵀAv < 0.

A PSD matrix A ∈ R^{n×n} implicitly encodes the "pairwise similarities" of a family of vectors b_1, …, b_n ∈ Rⁿ.
- Follows from the characterization A = BᵀB for some B; A_{ij} = ⟨b_i, b_j⟩.

Can convert between A and B efficiently.
- B to A: matrix multiplication.
- A to B: B can be expressed in terms of the eigenvectors/eigenvalues of A, which can be computed to arbitrary precision via power methods. Alternatively: Cholesky decomposition, SVD, ….

Basics of PSD Matrices 5/13
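The separation oracle is one eigendecomposition away. A minimal sketch (the function name and tolerance are illustrative assumptions, not from the slides):

```python
import numpy as np

def psd_separation_oracle(A, tol=1e-9):
    """Return None if A is (numerically) PSD; otherwise return a
    violated direction v with vᵀAv < 0. Assumes A is symmetric."""
    lam, Q = np.linalg.eigh(A)          # eigenvalues in ascending order
    if lam[0] >= -tol:
        return None                     # all eigenvalues nonnegative: A ⪰ 0
    return Q[:, 0]                      # eigenvector of the most negative eigenvalue

# A PSD matrix passes; a matrix with a negative eigenvalue yields a certificate
assert psd_separation_oracle(np.eye(3)) is None
A = np.diag([1.0, -2.0, 3.0])
v = psd_separation_oracle(A)
assert v @ A @ v < 0
```

The returned v separates A from the PSD cone via the linear inequality vᵀXv ≥ 0, which every PSD matrix X satisfies but A violates.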
Outline
1 Basics of PSD Matrices
2 Semidefinite Programming
3 Max Cut
Convex Optimization

min (or max)  f(x)
subject to    x ∈ X

[Figure: a convex set]

Convex Optimization Problem
Generalization of LP where:
- The feasible set X is convex: αx + (1−α)y ∈ X for all x, y ∈ X and α ∈ [0, 1].
- The objective f is convex in the case of minimization: f(αx + (1−α)y) ≤ αf(x) + (1−α)f(y) for all x, y ∈ X and α ∈ [0, 1].
- The objective f is concave in the case of maximization.

Semidefinite Programming 6/13
Convex optimization problems are solvable efficiently (i.e. in polynomial time) to arbitrary precision under mild conditions:
- a separation oracle for X
- a first-order oracle for evaluating f(x) and ∇f(x)

For more detail: take CSCI 675!
Semidefinite Programs

Optimization problems where the feasible set is the PSD cone, possibly intersected with linear constraints.
Generalization of LP; special case of convex optimization.

maximize   cᵀx
subject to Ax ≤ b
           x_1 F_1 + x_2 F_2 + … + x_n F_n + G ⪰ 0

F_1, …, F_n, G, and A are given matrices, and c, b are given vectors.

Semidefinite Programming 7/13
Examples
- Fitting a distribution, say a Gaussian, to observed data: the variable is a positive semi-definite covariance matrix.
- As a relaxation of combinatorial problems that encode pairwise relationships, e.g. finding the maximum cut of a graph.
Fact
SDPs can be solved in polynomial time to arbitrary precision, since PSD constraints admit a polynomial-time separation oracle.
Outline
1 Basics of PSD Matrices
2 Semidefinite Programming
3 Max Cut
The Max Cut Problem
Given an undirected graph G = (V, E), find a partition of V into (S, V \ S) maximizing the number of edges with exactly one endpoint in S.

maximize   Σ_{(i,j)∈E} (1 − x_i x_j)/2
subject to x_i ∈ {−1, 1}, for i ∈ V.

Instead of requiring x_i to lie on the 1-dimensional sphere {−1, 1}, we relax and permit it to lie on the n-dimensional sphere, where n = |V|.

Vector Program Relaxation
maximize   Σ_{(i,j)∈E} (1 − v_i · v_j)/2
subject to ‖v_i‖₂ = 1, for i ∈ V,
           v_i ∈ Rⁿ, for i ∈ V.

Max Cut 8/13
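The integer program above really does count cut edges: an edge contributes (1 − x_i x_j)/2 = 1 exactly when x_i ≠ x_j. A brute-force check on a hypothetical 4-cycle, whose maximum cut is all 4 edges:

```python
from itertools import product

# 4-cycle: bipartite, so the maximum cut severs every edge
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def cut_value(x):
    # objective of the integer program: Σ_{(i,j)∈E} (1 - x_i x_j) / 2
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)

opt = max(cut_value(x) for x in product([-1, 1], repeat=n))
assert opt == 4                         # e.g. x = (1, -1, 1, -1)
```

Enumerating {−1, 1}^n takes 2^n time, which is exactly why the relaxation is needed for large graphs.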
SDP Relaxation

Recall: a symmetric n×n matrix Y is PSD iff Y = VᵀV for some n×n matrix V.
Equivalently: PSD matrices encode the pairwise dot products of the columns of V.
When the diagonal entries of Y are 1, V has unit-length columns.
Recall: Y and V can be recovered from each other efficiently.

Vector Program Relaxation
maximize   Σ_{(i,j)∈E} (1 − v_i · v_j)/2
subject to ‖v_i‖₂ = 1, for i ∈ V,
           v_i ∈ Rⁿ, for i ∈ V.

SDP Relaxation
maximize   Σ_{(i,j)∈E} (1 − Y_{ij})/2
subject to Y_{ii} = 1, for i ∈ V,
           Y ∈ S^n_+.

Max Cut 9/13
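The correspondence between the two programs is concrete: any unit vectors v_i give a feasible Y = VᵀV, and any feasible Y factors back into unit vectors. A sketch with hypothetical random unit vectors (Cholesky is one valid factorization when Y is positive definite, which random vectors give almost surely):

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.standard_normal((3, 3))
V /= np.linalg.norm(V, axis=0)          # normalize columns: ‖v_i‖ = 1

Y = V.T @ V                             # Gram matrix of the v_i
assert np.allclose(np.diag(Y), 1.0)     # Y_ii = ‖v_i‖² = 1: feasible
assert np.all(np.linalg.eigvalsh(Y) >= -1e-9)   # Y ⪰ 0

# Recover some W with Y = WᵀW via Cholesky (Y = L Lᵀ, take W = Lᵀ)
W = np.linalg.cholesky(Y).T
assert np.allclose(W.T @ W, Y)
```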
Goemans-Williamson Algorithm for Max Cut

1. Solve the SDP to get Y ⪰ 0.
2. Decompose Y into VᵀV.
3. Draw a random vector r uniformly from the unit sphere.
4. Place nodes i with v_i · r ≥ 0 on one side of the cut, the rest on the other side.

SDP Relaxation
maximize   Σ_{(i,j)∈E} (1 − Y_{ij})/2
subject to Y_{ii} = 1, ∀i,
           Y ∈ S^n_+.

Max Cut 10/13
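Steps 3–4 (the hyperplane rounding) can be sketched without an SDP solver by writing down an optimal vector solution by hand; for the triangle K₃, vectors at 120° achieve the SDP value 3·(1 − cos 120°)/2 = 2.25, while the true max cut is 2. This instance is an illustrative assumption, not from the slides:

```python
import numpy as np

# Triangle K3 with v_i at 120° apart in the plane
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
V = np.stack([np.cos(angles), np.sin(angles)])  # columns are unit vectors v_i
edges = [(0, 1), (1, 2), (2, 0)]

rng = np.random.default_rng(3)
trials = 20_000
total = 0
for _ in range(trials):
    r = rng.standard_normal(2)                  # random hyperplane normal
    side = (V.T @ r >= 0)                       # steps 3-4 of the algorithm
    total += sum(side[i] != side[j] for i, j in edges)

# Each edge is cut with probability θ/π = 2/3, so the expected cut size is 2
assert abs(total / trials - 2.0) < 0.01
```

In this instance every generic hyperplane cuts exactly 2 of the 3 edges, so the rounding matches the integer optimum despite the SDP value being 2.25.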
We will prove the following lemma.

Lemma
The random hyperplane cuts each edge (i, j) with probability at least 0.878 · (1 − Y_{ij})/2.

The theorem below then follows by linearity of expectation, together with the fact that OPT_SDP ≥ OPT (since the SDP is a relaxation).

Theorem
The Goemans-Williamson algorithm outputs a random cut of expected size at least 0.878 · OPT.

Max Cut 11/13
We use the following fact.

Fact
For all angles θ ∈ [0, π],
    θ/π ≥ 0.878 · (1 − cos θ)/2.

Max Cut 12/13
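The constant can be located numerically: minimize the ratio (θ/π) / ((1 − cos θ)/2) over a fine grid (the ratio blows up as θ → 0, so the minimum is interior, near θ ≈ 2.33). A stdlib-only sketch:

```python
import math

# Numerically locate min over θ ∈ (0, π] of (θ/π) / ((1 - cos θ)/2)
ratios = []
for k in range(1, 100_000):
    theta = math.pi * k / 100_000
    ratios.append((theta / math.pi) / ((1 - math.cos(theta)) / 2))

alpha = min(ratios)
assert 0.878 < alpha < 0.879            # Goemans-Williamson constant ≈ 0.87856
```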
Lemma
The random hyperplane cuts each edge (i, j) with probability at least 0.878 · (1 − Y_{ij})/2.

Proof:
- (i, j) is cut iff sign(r · v_i) ≠ sign(r · v_j).
- We can zoom in on the 2-dimensional plane containing v_i and v_j: discard the component of r perpendicular to that plane, leaving r̂. The direction of r̂ is uniform in that plane.
- Let θ_ij be the angle between v_i and v_j. Note Y_{ij} = v_i · v_j = cos(θ_ij).
- r̂ cuts (i, j) with probability
    2θ_ij / (2π) = θ_ij / π ≥ 0.878 · (1 − cos θ_ij)/2 = 0.878 · (1 − Y_{ij})/2.

Max Cut 13/13
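The key step, that two unit vectors at angle θ land on opposite sides of a random hyperplane with probability θ/π, is easy to verify by Monte Carlo in the plane (θ = 1 radian here is an arbitrary choice):

```python
import math
import random

random.seed(4)
theta = 1.0
vi = (1.0, 0.0)
vj = (math.cos(theta), math.sin(theta))  # unit vector at angle θ from v_i

trials = 100_000
cut = 0
for _ in range(trials):
    # A 2-d Gaussian has uniformly random direction
    r = (random.gauss(0, 1), random.gauss(0, 1))
    si = r[0] * vi[0] + r[1] * vi[1] >= 0
    sj = r[0] * vj[0] + r[1] * vj[1] >= 0
    cut += (si != sj)

assert abs(cut / trials - theta / math.pi) < 0.01   # P(cut) ≈ θ/π
```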