Riemannian Optimization with its Application to the Blind Deconvolution Problem
Wen Huang
Rice University
April 23, 2018
Problem Statement and Motivations
Riemannian Optimization
Problem: Given f(x) : M → R, solve

    min_{x ∈ M} f(x),

where M is a Riemannian manifold.
Examples of Manifolds
Sphere, ellipsoid
Stiefel manifold: St(p, n) = {X ∈ R^{n×p} | X^T X = I_p}
Grassmann manifold: the set of all p-dimensional subspaces of R^n
The set of fixed-rank m-by-n matrices
And many more
Riemannian Manifolds
Roughly, a Riemannian manifold M is a smooth set with a smoothly-varying inner product on the tangent spaces.
[Figure: tangent vectors η, ξ in the tangent space T_x M and their inner product 〈η, ξ〉_x ∈ R.]
Applications
Three applications are used to demonstrate the importance of Riemannian optimization:
Independent component analysis [CS93]
Matrix completion problem [Van13, HAGH16]
Elastic shape analysis of curves [SKJJ11, HGSA15]
Application: Independent Component Analysis
[Figure: the cocktail party problem — p speakers produce sources s(t) ∈ R^p, n microphones record observations x(t) ∈ R^n, and ICA extracts the independent components.]
Observed signal is x(t) = As(t)
One approach:
Assumption: E{s(t)s(t + τ)^T} is diagonal for all τ;
C_τ(x) := E{x(t)x(t + τ)^T} = A E{s(t)s(t + τ)^T} A^T
Application: Independent Component Analysis
Minimize the joint diagonalization cost function on the Stiefel manifold [TI06]:

    f : St(p, n) → R : V ↦ Σ_{i=1}^N ‖V^T C_i V − diag(V^T C_i V)‖²_F,

where C_1, …, C_N are covariance matrices and St(p, n) = {X ∈ R^{n×p} | X^T X = I_p}.
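A minimal numpy sketch of this cost (illustrative only; the function name and the representation of the C_i as a list are assumptions, not from the slides):

```python
import numpy as np

def joint_diag_cost(V, Cs):
    # V: n-by-p matrix with V^T V = I_p (a point on St(p, n));
    # Cs: list of n-by-n covariance matrices C_1, ..., C_N.
    total = 0.0
    for C in Cs:
        W = V.T @ C @ V                    # p-by-p matrix V^T C_i V
        off = W - np.diag(np.diag(W))      # its off-diagonal part
        total += np.linalg.norm(off, 'fro') ** 2
    return total
```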
Application: Matrix Completion Problem
[Figure: a sparse user-by-movie rating matrix M, with only a few entries observed.]
Only a few entries of the matrix M are observed
The goal: complete the matrix M
Application: Matrix Completion Problem
[Figure: the observed entries a_ij of the rating matrix written as a product of a tall meta-user matrix B and a wide meta-movie matrix C, i.e., a low-rank factorization.]
Minimize the cost function

    f : R_r^{m×n} → R : X ↦ ‖P_Ω M − P_Ω X‖²_F.

R_r^{m×n} is the set of m-by-n matrices of rank r. It is known to be a Riemannian manifold.
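A minimal numpy sketch of this cost (illustrative; the boolean-mask representation of Ω is an assumption):

```python
import numpy as np

def completion_cost(X, M, mask):
    # mask: boolean m-by-n array marking the observed entries Omega;
    # P_Omega zeroes out everything outside the observed set.
    residual = np.where(mask, M - X, 0.0)  # P_Omega(M) - P_Omega(X)
    return np.linalg.norm(residual, 'fro') ** 2
```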
Application: Elastic Shape Analysis of Curves
[Figure: a gallery of 32 example curve shapes.]
Classification [LKS+12, HGSA15]
Face recognition [DBS+13]
Application: Elastic Shape Analysis of Curves
Elastic shape analysis invariants:
Rescaling
Translation
Rotation
Reparametrization
The shape space is a quotient space
[Figure: rescaled, translated, rotated, and reparametrized copies of one curve — all are the same shape.]
Application: Elastic Shape Analysis of Curves
[Figure: representatives q_1 of shape 1 and q_2, q̃_2 of shape 2, together with their equivalence classes [q_1] and [q_2].]
The optimization problem min_{q̃_2 ∈ [q_2]} dist(q_1, q̃_2) is defined on a Riemannian manifold
Computation of a geodesic between two shapes
Computation of Karcher mean of a population of shapes
More Applications
Role model extraction [MHB+16]
Computations on SPD matrices [YHAG17]
Phase retrieval problem [HGZ17]
Blind deconvolution [HH17]
Synchronization of rotations [Hua13]
Computations on low-rank tensors
Low-rank approximate solution for Lyapunov equation
Comparison with Constrained Optimization
All iterates on the manifold
Convergence properties of unconstrained optimization algorithms
No need to consider Lagrange multipliers or penalty functions
Exploit the structure of the constrained set
Optimization Framework and History
Iterations on the Manifold
Consider the following generic update for an iterative Euclidean optimization algorithm:

    x_{k+1} = x_k + Δx_k = x_k + α_k s_k.

This iteration is implemented in numerous ways, e.g.:
Steepest descent: x_{k+1} = x_k − α_k ∇f(x_k)
Newton's method: x_{k+1} = x_k − [∇²f(x_k)]^{−1} ∇f(x_k)
Trust region method: Δx_k is set by optimizing a local model.
Riemannian manifolds provide:
Riemannian concepts describing directions and movement on the manifold
Riemannian analogues of the gradient and Hessian
Riemannian gradient and Riemannian Hessian
Definition
The Riemannian gradient of f at x is the unique tangent vector grad f(x) ∈ T_x M satisfying, for all η ∈ T_x M,

    D f(x)[η] = 〈grad f(x), η〉_x;

grad f(x) is the direction of steepest ascent.
Definition
The Riemannian Hessian of f at x is the symmetric linear operator

    Hess f(x) : T_x M → T_x M : η ↦ ∇_η grad f,

where ∇ is the affine connection.
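For an embedded submanifold of Euclidean space with the induced metric, the Riemannian gradient is the tangent-space projection of the Euclidean gradient. A minimal sketch for the unit sphere S^{n−1} (the choice of manifold here is an illustrative assumption):

```python
import numpy as np

def sphere_rgrad(x, egrad):
    # Riemannian gradient on the unit sphere S^{n-1} embedded in R^n:
    # project the Euclidean gradient onto T_x S^{n-1} = {v : x^T v = 0}.
    return egrad - (x @ egrad) * x
```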
Retractions
Euclidean: x_{k+1} = x_k + α_k d_k    |    Riemannian: x_{k+1} = R_{x_k}(α_k η_k)
Definition
A retraction is a mapping R from TM to M satisfying the following:
R is continuously differentiable;
R_x(0) = x;
D R_x(0)[η] = η.
A retraction maps tangent vectors back to the manifold and defines a curve t ↦ R_x(tη) in the direction η.
[Figure: the curve t ↦ R_x(tη) on M leaving x along η ∈ T_x M.]
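A sketch of one standard retraction on the unit sphere, by normalization (the specific manifold and retraction are illustrative assumptions; the slides do not fix either here):

```python
import numpy as np

def sphere_retraction(x, eta):
    # R_x(eta) = (x + eta) / ||x + eta||: satisfies R_x(0) = x and
    # D R_x(0)[eta] = eta for tangent vectors eta (x^T eta = 0).
    y = x + eta
    return y / np.linalg.norm(y)
```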
Categories of Riemannian optimization methods
Retraction-based: local information only
Line-search-based: use a local tangent vector and R_x(tη) to define the search curve
Steepest descent
Newton
Local model-based: a series of flat-space subproblems
Riemannian trust region Newton (RTR)
Riemannian adaptive cubic overestimation (RACO)
Categories of Riemannian optimization methods
Retraction and transport-based: information from multiple tangent spaces
Nonlinear conjugate gradient: multiple tangent vectors
Quasi-Newton, e.g. Riemannian BFGS: transport operators between tangent spaces
Additional element required for optimizing a cost function on (M, g): formulas for combining information from multiple tangent spaces.
Vector Transports
Vector transport: transport a tangent vector from one tangent space to another.
T_{η_x} ξ_x denotes the transport of ξ_x to the tangent space at R_x(η_x), where R is a retraction associated with T.
[Figure: vector transport of ξ_x ∈ T_x M along η_x to the tangent space at R_x(η_x).]
Retraction/Transport-based Riemannian Optimization
Given a retraction and a vector transport, we can generalize many Euclidean methods to the Riemannian setting. Do the Riemannian versions of the methods work well?
No:
They lose many theoretical results and important properties;
They impose restrictions on the retraction/vector transport.
Retraction/Transport-based Riemannian Optimization
Benefits
Increased generality does not compromise the important theory
Less expensive than or similar to previous approaches
May provide theory to explain the behavior of algorithms specifically developed for a particular application – or closely related ones
Possible Problems
May be inefficient compared to algorithms that exploit application details
Some History of Optimization On Manifolds (I)
Luenberger (1973), Introduction to Linear and Nonlinear Programming. Luenberger mentions the idea of performing line search along geodesics, “which we would use if it were computationally feasible (which it definitely is not)”. Rosen (1961) essentially anticipated this but was not explicit in his Gradient Projection Algorithm.
Gabay (1982), Minimizing a differentiable function over a differential manifold. Steepest descent along geodesics; Newton's method along geodesics; quasi-Newton methods along geodesics. On Riemannian submanifolds of R^n.
Smith (1993-94), Optimization techniques on Riemannian manifolds. Levi-Civita connection ∇; Riemannian exponential mapping; parallel translation.
Some History of Optimization On Manifolds (II)
The “pragmatic era” begins:
Manton (2002), Optimization algorithms exploiting unitary constraints“The present paper breaks with tradition by not moving alonggeodesics”. The geodesic update Expx η is replaced by a projectiveupdate π(x + η), the projection of the point x + η onto the manifold.
Adler, Dedieu, Shub, et al. (2002), Newton’s method on Riemannianmanifolds and a geometric model for the human spine. The exponentialupdate is relaxed to the general notion of retraction. The geodesic canbe replaced by any (smoothly prescribed) curve tangent to the searchdirection.
Absil, Mahony, Sepulchre (2007) Nonlinear conjugate gradient usingretractions.
Some History of Optimization On Manifolds (III)
Theory, efficiency, and library design improve dramatically:
Absil, Baker, Gallivan (2004-07), Theory and implementations ofRiemannian Trust Region method. Retraction-based approach. Matrixmanifold problems, software repository
http://www.math.fsu.edu/~cbaker/GenRTR
Anasazi Eigenproblem package in Trilinos Library at Sandia NationalLaboratory
Absil, Gallivan, Qi (2007-10), Basic theory and implementations ofRiemannian BFGS and Riemannian Adaptive Cubic Overestimation.Parallel translation and Exponential map theory, Retraction and vectortransport empirical evidence.
Some History of Optimization On Manifolds (IV)
Ring and Wirth (2012), combination of differentiated retraction and isometric vector transport for the convergence analysis of RBFGS
Absil, Gallivan, Huang (2009-2017), complete theory of Riemannian quasi-Newton methods and the related transport/retraction conditions, Riemannian SR1 with trust region, RBFGS on partly smooth problems, a C++ library: http://www.math.fsu.edu/~whuang2/ROPTLIB
Sato, Iwai (2013-2015), Zhu (2017), global convergence analysis for Riemannian conjugate gradient methods
Bonnabel (2011), Sato, Kasai, Mishra (2017), Riemannian stochastic gradient descent methods
Many people: application interests increase noticeably
Current UCL/FSU Methods
Riemannian steepest descent [AMS08]
Riemannian conjugate gradient [AMS08]
Riemannian trust region Newton [ABG07]: global, quadratic convergence
Riemannian Broyden family [HGA15, HAG18]: global (convex), superlinear convergence
Riemannian trust region SR1 [HAG15]: global, (d+1)-superlinear convergence
For large problems:
Limited-memory RTRSR1
Limited-memory RBFGS
Current UCL/FSU Methods
The Riemannian manifold optimization library (ROPTLIB) is used to optimize a function on a manifold.
Most state-of-the-art methods;
Commonly-encountered manifolds;
Written in C++;
Interfaces with Matlab, Julia, and R;
Built on BLAS and LAPACK;
www.math.fsu.edu/~whuang2/Indices/index_ROPTLIB.html
Current/Future Work on Riemannian methods
Manifold and inequality constraints
Discretization of infinite-dimensional manifolds and the convergence/accuracy of the approximate minimizers – specific to a problem, and extracting general conclusions
Partly smooth cost functions on Riemannian manifolds
Limited-memory quasi-Newton methods on manifolds
Problem Statement and Methods
Blind deconvolution
Blind deconvolution is the problem of recovering two unknown signals from their convolution.
Problem Statement
Blind deconvolution (discretized version): recover two unknown signals w ∈ C^L and x ∈ C^L from their convolution y = w ∗ x ∈ C^L.
We only consider circular convolution:

    ⎡ y_1 ⎤   ⎡ w_1  w_L     w_{L−1} ⋯ w_2 ⎤ ⎡ x_1 ⎤
    ⎢ y_2 ⎥   ⎢ w_2  w_1     w_L     ⋯ w_3 ⎥ ⎢ x_2 ⎥
    ⎢ y_3 ⎥ = ⎢ w_3  w_2     w_1     ⋯ w_4 ⎥ ⎢ x_3 ⎥
    ⎢  ⋮  ⎥   ⎢  ⋮    ⋮       ⋮      ⋱  ⋮  ⎥ ⎢  ⋮  ⎥
    ⎣ y_L ⎦   ⎣ w_L  w_{L−1} w_{L−2} ⋯ w_1 ⎦ ⎣ x_L ⎦

Let y ← Fy, w ← Fw, and x ← Fx, where F is the DFT matrix; then y = w ⊙ x, where ⊙ is the Hadamard product, i.e., y_i = w_i x_i.
Equivalent question: Given y, find w and x.
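A small numpy check of this equivalence (illustrative; the length L and the random signals are arbitrary choices):

```python
import numpy as np

L = 8
rng = np.random.default_rng(0)
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)
x = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# Circular convolution computed directly from the circulant matrix rows ...
y = np.array([sum(w[(i - j) % L] * x[j] for j in range(L)) for i in range(L)])

# ... agrees with the inverse DFT of the Hadamard product of the DFTs.
assert np.allclose(y, np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))
```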
Problem Statement
Problem: Given y ∈ C^L, find w, x ∈ C^L so that y = w ⊙ x.

An ill-posed problem: infinitely many solutions exist;
Assumption: w and x lie in known subspaces, i.e., w = Bh and x = Cm, with B ∈ C^{L×K} and C ∈ C^{L×N};
Reasonable in various applications;
Enables rigorous theory (for L/(K + N) reasonably large).

Problem under the assumption
Given y ∈ C^L, B ∈ C^{L×K}, and C ∈ C^{L×N}, find h ∈ C^K and m ∈ C^N so that

    y = Bh ⊙ Cm = diag(B h m∗ C∗).
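A minimal sketch of the measurement map diag(B h m∗ C∗) (illustrative; note it can be evaluated entrywise without forming the K-by-N matrix h m∗):

```python
import numpy as np

def forward_map(B, h, m, C):
    # y_i = [B (h m^*) C^*]_{ii} = (B h)_i * conj((C m)_i),
    # so the L-by-L matrix product is never formed explicitly.
    return (B @ h) * np.conj(C @ m)
```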
Related work
Ahmed et al. [ARR14]¹
Convex problem:

    min_{X ∈ C^{K×N}} ‖X‖_n  s.t.  y = diag(B X C∗),

where ‖·‖_n denotes the nuclear norm and X = hm∗;
(Theoretical result): with high probability, the unique minimizer is the true solution;
The convex problem is expensive to solve.

¹ A. Ahmed, B. Recht, and J. Romberg, Blind deconvolution using convex programming, IEEE Transactions on Information Theory, 60:1711-1732, 2014
Related work
Li et al. [LLSW16]²
Nonconvex problem³:

    min_{(h,m) ∈ C^K × C^N} ‖y − diag(B h m∗ C∗)‖²_2;

(Theoretical result): with high probability, the Wirtinger flow method combined with a good initialization yields the true solution;
Empirically, a lower successful-recovery probability than the alternating minimization algorithm.

² X. Li et al., Rapid, robust, and reliable blind deconvolution via nonconvex optimization, preprint arXiv:1606.04933, 2016
³ The penalty term in the cost function is omitted for simplicity
Manifold Approach
The problem is defined on the set of rank-one matrices (denoted by C_1^{K×N}), neither C^{K×N} nor C^K × C^N. Why not work on the manifold directly?
A representative Riemannian method: the Riemannian steepest descent method (RSD)
(Theoretical result): with high probability, RSD combined with a good initialization yields the true solution;
The Riemannian Hessian at the true solution is well-conditioned.
Manifold Approach
To work on the manifold directly, a Riemannian steepest descent method needs:
A representation of C_1^{K×N};
A representation of directions (tangent vectors);
A Riemannian metric;
The Riemannian gradient.
A Representation of C_1^{K×N}: C_*^K × C_*^N / C_*

Given X ∈ C_1^{K×N}, there exists (h, m) with h ≠ 0 and m ≠ 0 such that X = hm∗;
(h, m) is not unique;
The equivalence class: [(h, m)] = {(ha, m a^{−∗}) | a ≠ 0};
Quotient manifold: C_*^K × C_*^N / C_* = {[(h, m)] | (h, m) ∈ C_*^K × C_*^N}

[Figure: the total space M̄ = C_*^K × C_*^N with point (h, m), the ambient space E = C^K × C^N, and the quotient M = C_*^K × C_*^N / C_* with point [(h, m)].]

C_*^K × C_*^N / C_* ≃ C_1^{K×N}
Cost function⁴

Riemannian approach:

    f : C_*^K × C_*^N / C_* → R : [(h, m)] ↦ ‖y − diag(B h m∗ C∗)‖²_2.

Approach in [LLSW16]:

    f : C^K × C^N → R : (h, m) ↦ ‖y − diag(B h m∗ C∗)‖²_2.

⁴ The penalty term in the cost function is omitted for simplicity.
Representation of directions on C_*^K × C_*^N / C_*
[Figure: the fiber [x] in the total space, tangent vectors ξ_x, and the horizontal lift η↑_x of a tangent vector η_{[x]} of the quotient.]
x denotes (h, m);
Green line: the tangent space of the fiber [x];
Red line (the horizontal space at x): orthogonal to the green line;
The horizontal space at x: a representation of the tangent space of M at [x].
Retraction
Euclidean: x_{k+1} = x_k + α_k d_k    |    Riemannian: x_{k+1} = R_{x_k}(α_k η_k)

Retraction R : TM → M:
R(0_{[x]}) = [x];
(d/dt) R(t η_{[x]}) |_{t=0} = η_{[x]}.
Retraction on C_*^K × C_*^N / C_*:

    R_{[(h,m)]}(η_{[(h,m)]}) = [(h + η_h, m + η_m)].
[Figure: two retractions R and R̃ leaving x along η ∈ T_x M.]
A Riemannian metric
Riemannian metric: an inner product on tangent spaces; defines angles and lengths.

[Figure: the same manifold under two Riemannian metrics g_1 and g_2; changing the metric may influence the difficulty of a problem.]
A Riemannian metric
Idea for choosing a Riemannian metric
The block-diagonal terms in the Euclidean Hessian are used to choose the Riemannian metric.
Let 〈u, v〉_2 = Re(trace(u∗ v)):

    (1/2) 〈η_h, Hess_h f [ξ_h]〉_2 = 〈diag(B η_h m∗ C∗), diag(B ξ_h m∗ C∗)〉_2 ≈ 〈η_h m∗, ξ_h m∗〉_2
    (1/2) 〈η_m, Hess_m f [ξ_m]〉_2 = 〈diag(B h η_m∗ C∗), diag(B h ξ_m∗ C∗)〉_2 ≈ 〈h η_m∗, h ξ_m∗〉_2,

where ≈ can be derived from some assumptions.
The Riemannian metric (see the sketch below):

    g(η_{[x]}, ξ_{[x]}) = 〈η_h, ξ_h m∗m〉_2 + 〈η_m∗, ξ_m∗ h∗h〉_2;
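Since m∗m = ‖m‖²_2 and h∗h = ‖h‖²_2 are scalars, the metric is a rescaled Euclidean inner product. A minimal numpy sketch (the names and the tuple representation of points/vectors are assumptions):

```python
import numpy as np

def metric(x, eta, xi):
    # g(eta, xi) = <eta_h, xi_h>_2 * ||m||^2 + <eta_m, xi_m>_2 * ||h||^2,
    # where <u, v>_2 = Re(u^* v); np.vdot conjugates its first argument.
    h, m = x
    eta_h, eta_m = eta
    xi_h, xi_m = xi
    return (np.vdot(eta_h, xi_h).real * np.vdot(m, m).real
            + np.vdot(eta_m, xi_m).real * np.vdot(h, h).real)
```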
Riemannian gradient
A tangent vector: grad f([x]) ∈ T_{[x]}M;
Satisfies: D f([x])[η_{[x]}] = g(grad f([x]), η_{[x]}) for all η_{[x]} ∈ T_{[x]}M;
Represented by a vector in a horizontal space;
Riemannian gradient:

    (grad f([(h, m)]))↑_{(h,m)} = Proj(∇_h f(h, m)(m∗m)^{−1}, ∇_m f(h, m)(h∗h)^{−1});
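A minimal sketch of this horizontal lift (illustrative; the horizontal-space projection Proj is omitted here, and the names are assumptions):

```python
import numpy as np

def rgrad_lift(x, egrad):
    # Scale the Euclidean partial gradients by (m^* m)^{-1} and (h^* h)^{-1},
    # the rescaling induced by the chosen metric; Proj is omitted.
    h, m = x
    gh, gm = egrad
    return gh / np.vdot(m, m).real, gm / np.vdot(h, h).real
```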
A Riemannian steepest descent method (RSD)
An implementation of a Riemannian steepest descent method⁵

0. Given (h_0, m_0) and a step size α > 0; set k = 0.
1. d_k = ‖h_k‖_2 ‖m_k‖_2;  h_k ← √(d_k) h_k / ‖h_k‖_2;  m_k ← √(d_k) m_k / ‖m_k‖_2;
2. (h_{k+1}, m_{k+1}) = (h_k, m_k) − α (∇_{h_k} f(h_k, m_k) / d_k, ∇_{m_k} f(h_k, m_k) / d_k);
3. If not converged, go to Step 1.

Wirtinger flow method in [LLSW16]

0. Given (h_0, m_0) and a step size α > 0; set k = 0.
1. (h_{k+1}, m_{k+1}) = (h_k, m_k) − α (∇_{h_k} f(h_k, m_k), ∇_{m_k} f(h_k, m_k));
2. If not converged, go to Step 1.

⁵ The penalty term in the cost function is omitted for simplicity.
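A compact numpy sketch of the RSD iteration above (the step size, stopping rule, and omission of the penalty term are simplifying assumptions):

```python
import numpy as np

def rsd(h, m, grad_f, alpha=0.01, tol=1e-8, max_iter=5000):
    # grad_f(h, m) returns the Euclidean partial gradients (g_h, g_m).
    for _ in range(max_iter):
        d = np.linalg.norm(h) * np.linalg.norm(m)
        h = np.sqrt(d) * h / np.linalg.norm(h)  # rebalance the representative
        m = np.sqrt(d) * m / np.linalg.norm(m)
        gh, gm = grad_f(h, m)
        if np.linalg.norm(gh) + np.linalg.norm(gm) <= tol * d:
            break
        h = h - alpha * gh / d                  # metric-scaled gradient step
        m = m - alpha * gm / d
    return h, m
```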
Penalty
Penalty terms for (i) the Riemannian method and (ii) Wirtinger flow [LLSW16]:

    (i):  ρ Σ_{i=1}^L G_0( L |b_i∗ h|² ‖m‖²_2 / (8 d² µ²) )

    (ii): ρ [ G_0( ‖h‖²_2 / (2d) ) + G_0( ‖m‖²_2 / (2d) ) + Σ_{i=1}^L G_0( L |b_i∗ h|² / (8 d µ²) ) ],

where G_0(t) = max(t − 1, 0)² and [b_1 b_2 … b_L]∗ = B.
The first two terms in (ii) penalize large values of ‖h‖_2 and ‖m‖_2;
The other terms promote a small coherence;
The penalty in (i) is defined in the quotient space whereas the one in (ii) is not.
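A minimal numpy sketch of the penalty in (i) (illustrative; the function names are assumptions):

```python
import numpy as np

def G0(t):
    # G_0(t) = max(t - 1, 0)^2
    return np.maximum(t - 1.0, 0.0) ** 2

def penalty_riemannian(B, h, m, d, mu, rho):
    # rho * sum_i G_0( L |b_i^* h|^2 ||m||^2 / (8 d^2 mu^2) );
    # the rows of B are b_i^*, so (B @ h)_i = b_i^* h.
    L = B.shape[0]
    t = L * np.abs(B @ h) ** 2 * np.vdot(m, m).real / (8 * d**2 * mu**2)
    return rho * G0(t).sum()
```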
Penalty/Coherence
Coherence is defined as

    µ_h² = L ‖Bh‖²_∞ / ‖h‖²_2 = L max(|b_1∗ h|², |b_2∗ h|², …, |b_L∗ h|²) / ‖h‖²_2;

The coherence at the true solution [(h_♯, m_♯)] influences the probability of recovery
Small coherence is preferred
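A one-line numpy sketch of this quantity (illustrative):

```python
import numpy as np

def coherence_sq(B, h):
    # mu_h^2 = L * ||B h||_inf^2 / ||h||_2^2; the rows of B are b_i^*.
    return B.shape[0] * np.max(np.abs(B @ h)) ** 2 / np.vdot(h, h).real
```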
Penalty
Promote low coherence:

    ρ Σ_{i=1}^L G_0( L |b_i∗ h|² ‖m‖²_2 / (8 d² µ²) ),

where G_0(t) = max(t − 1, 0)².

[Figure: contours of ‖y − diag(B h m∗ C∗)‖²_2 without (left) and with (right) the penalty, and the initial point.]
Initialization
Initialization method [LLSW16]
(d, h̃_0, m̃_0): the leading singular value and singular vectors from the SVD of B∗ diag(y) C;
h_0 = argmin_z ‖z − √d h̃_0‖²_2 subject to √L ‖Bz‖_∞ ≤ 2√d µ;
m_0 = √d m̃_0;
Initial iterate: [(h_0, m_0)].
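A sketch of the spectral part of this initialization (illustrative; the constrained projection step that produces h_0 is omitted):

```python
import numpy as np

def spectral_init(B, C, y):
    # Leading singular triple of B^* diag(y) C (a K-by-N matrix).
    A = B.conj().T @ (y[:, None] * C)   # B^* diag(y) C without forming diag(y)
    U, s, Vh = np.linalg.svd(A)
    d = s[0]
    h0_tilde, m0_tilde = U[:, 0], Vh[0].conj()
    return np.sqrt(d) * h0_tilde, np.sqrt(d) * m0_tilde, d
```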
Numerical and Theoretical Results
Numerical Results
Synthetic tests
Efficiency
Probability of successful recovery
Image deblurring
Kernels with known supports
Motion kernel with unknown supports
Synthetic tests
The matrix B is the first K columns of the unitary DFT matrix;
The matrix C is a Gaussian random matrix;
The measurement is y = diag(B h_♯ m_♯∗ C∗), where the entries of the true solution (h_♯, m_♯) are drawn from a Gaussian distribution;
All tested algorithms use the same initial point;
Stop when ‖y − diag(B h m∗ C∗)‖_2 / ‖y‖_2 ≤ 10⁻⁸.
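A sketch of one such synthetic instance (illustrative; whether the Gaussian entries are real or complex is an assumption here):

```python
import numpy as np

def synthetic_instance(L, K, N, rng=np.random.default_rng()):
    F = np.fft.fft(np.eye(L)) / np.sqrt(L)   # unitary DFT matrix
    B = F[:, :K]                             # first K columns
    C = rng.standard_normal((L, N))          # Gaussian random matrix
    h = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    m = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    y = (B @ h) * np.conj(C @ m)             # diag(B h m^* C^*)
    return B, C, h, m, y
```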
Efficiency
Table: Comparison of efficiency (average of 100 random runs)

             |   L = 400, K = N = 50           |   L = 600, K = N = 50
Algorithm    | [LLSW16]   [LWB13]    R-SD      | [LLSW16]   [LWB13]    R-SD
nBh/nCm      |   351        718       208      |   162        294       122
nFFT         |   870       1436       518      |   401        588       303
RMSE         | 2.22×10⁻⁸  3.67×10⁻⁸ 2.20×10⁻⁸ | 1.48×10⁻⁸  2.34×10⁻⁸ 1.42×10⁻⁸

nBh/nCm: the numbers of Bh and Cm multiplications, respectively
nFFT: the number of Fourier transforms
RMSE: the relative error ‖hm∗ − h_♯m_♯∗‖_F / (‖h_♯‖_2 ‖m_♯‖_2)

[LLSW16]: X. Li et al., Rapid, robust, and reliable blind deconvolution via nonconvex optimization, preprint arXiv:1606.04933, 2016
[LWB13]: K. Lee et al., Near Optimal Compressed Sensing of a Class of Sparse Low-Rank Matrices via Sparse Power Factorization, preprint arXiv:1312.0525, 2013
Probability of successful recovery
Success if ‖hm∗ − h_♯m_♯∗‖_F / (‖h_♯‖_2 ‖m_♯‖_2) ≤ 10⁻².

[Figure: empirical phase transition curves for 1000 random runs — probability of successful recovery versus L/(K+N) for [LLSW16], [LWB13], and R-SD.]
Image deblurring
Image [WBX+07]: 1024-by-1024 pixels
Image deblurring with various kernels
Figure: Left: motion kernel from the Matlab function fspecial('motion', 50, 45); Middle: a sin-like kernel; Right: a Gaussian kernel with covariance [1, 0.8; 0.8, 1].
Image deblurring with various kernels
What subspaces are the two unknown signals in?
The image is approximately sparse in the Haar wavelet basis;
Use the blurred image to learn the dominant basis vectors: C.
The support of the blurring kernel is learned from the blurred image;
Suppose the supports of the blurring kernels are known: B.
L = 1048576, N = 20000, K_motion = 109, K_sin = 153, K_Gaussian = 181.
Image deblurring with various kernels
Figure: deblurred images for the three kernels. The number of iterations is 80; computational times are about 48s; relative errors ‖ŷ − (‖y‖/‖y_f‖) y_f‖ / ‖ŷ‖ are 0.038, 0.040, and 0.089 from left to right.
Image deblurring with unknown supports
Figure: Top: reconstructed image using the exact support; Bottom: estimated supports with the numbers of nonzero entries K_1 = 183, K_2 = 265, K_3 = 351, and K_4 = 441.
Image deblurring with unknown supports
Figure: Relative errors ‖ŷ − (‖y‖/‖y_f‖) y_f‖ / ‖ŷ‖ are 0.044, 0.048, 0.052, and 0.067 from left to right.
Summary
Introduced the framework of Riemannian optimization
Used applications to show the importance of Riemannian optimization
Briefly reviewed the history of Riemannian optimization
Introduced the blind deconvolution problem
Reviewed related work
Introduced a Riemannian steepest descent method
Demonstrated the performance of the Riemannian steepest descent method
Thank you!
References I
P.-A. Absil, C. G. Baker, and K. A. Gallivan. Trust-region methods on Riemannian manifolds. Foundations of Computational Mathematics, 7(3):303–330, 2007.
P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, Princeton, NJ, 2008.
A. Ahmed, B. Recht, and J. Romberg. Blind deconvolution using convex programming. IEEE Transactions on Information Theory, 60(3):1711–1732, March 2014.
J. F. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. IEE Proceedings F Radar and Signal Processing, 140(6):362, 1993.
H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama. 3D face recognition under expressions, occlusions, and pose variations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9):2270–2283, 2013.
W. Huang, P.-A. Absil, and K. A. Gallivan. A Riemannian symmetric rank-one trust-region method. Mathematical Programming, 150(2):179–216, February 2015.
References II
Wen Huang, P.-A. Absil, and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization, 28(1):470–495, 2018.
Wen Huang, P.-A. Absil, K. A. Gallivan, and Paul Hand. ROPTLIB: an object-oriented C++ library for optimization on Riemannian manifolds. Technical Report FSU16-14, Florida State University, 2016.
W. Huang, K. A. Gallivan, and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization, 25(3):1660–1685, 2015.
W. Huang, K. A. Gallivan, Anuj Srivastava, and P.-A. Absil. Riemannian optimization for registration of curves in elastic shape analysis. Journal of Mathematical Imaging and Vision, 54(3):320–343, 2015. DOI:10.1007/s10851-015-0606-8.
Wen Huang, K. A. Gallivan, and Xiangxiong Zhang. Solving PhaseLift by low-rank Riemannian optimization methods for complex semidefinite constraints. SIAM Journal on Scientific Computing, 39(5):B840–B859, 2017.
W. Huang and P. Hand. Blind deconvolution by a steepest descent algorithm on a quotient manifold, 2017. In preparation.
References III
W. Huang. Optimization algorithms on Riemannian manifolds with applications. PhD thesis, Florida State University, Department of Mathematics, 2013.
H. Laga, S. Kurtek, A. Srivastava, M. Golzarian, and S. J. Miklavcic. A Riemannian elastic metric for shape-based plant leaf classification. 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), pages 1–7, December 2012. doi:10.1109/DICTA.2012.6411702.
Xiaodong Li, Shuyang Ling, Thomas Strohmer, and Ke Wei. Rapid, robust, and reliable blind deconvolution via nonconvex optimization. CoRR, abs/1606.04933, 2016.
K. Lee, Y. Wu, and Y. Bresler. Near optimal compressed sensing of a class of sparse low-rank matrices via sparse power factorization. pages 1–80, 2013.
Melissa Marchand, Wen Huang, Arnaud Browet, Paul Van Dooren, and Kyle A. Gallivan. A Riemannian optimization approach for role model extraction. In Proceedings of the 22nd International Symposium on Mathematical Theory of Networks and Systems, pages 58–64, 2016.
A. Srivastava, E. Klassen, S. H. Joshi, and I. H. Jermyn. Shape analysis of elastic curves in Euclidean spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(7):1415–1428, September 2011. doi:10.1109/TPAMI.2010.184.
References IV
F. J. Theis and Y. Inouye. On the use of joint diagonalization in blind signal processing. 2006 IEEE International Symposium on Circuits and Systems, (2):7–10, 2006.
B. Vandereycken. Low-rank matrix completion by Riemannian optimization—extended version. SIAM Journal on Optimization, 23(2):1214–1236, 2013.
S. G. Wu, F. S. Bao, E. Y. Xu, Y.-X. Wang, Y.-F. Chang, and Q.-L. Xiang. A leaf recognition algorithm for plant classification using probabilistic neural network. 2007 IEEE International Symposium on Signal Processing and Information Technology, pages 11–16, 2007. arXiv:0707.4289v1.
Xinru Yuan, Wen Huang, P.-A. Absil, and K. A. Gallivan. A Riemannian quasi-Newton method for computing the Karcher mean of symmetric positive definite matrices. Technical Report FSU17-02, Florida State University, 2017. www.math.fsu.edu/~whuang2/papers/RMKMSPDM.htm.