Fast Sinkhorn Filters: Using Matrix Scaling for Non-Rigid Shape Correspondence with Functional Maps

Gautam Pai, Jing Ren, Simone Melzi, Peter Wonka, Maks Ovsjanikov. CVPR, Jun 2021, Nashville (virtual), United States.

HAL Id: hal-03184936, https://hal.archives-ouvertes.fr/hal-03184936v2. Submitted on 17 Apr 2021. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.



Fast Sinkhorn Filters: Using Matrix Scaling for Non-Rigid Shape Correspondence with Functional Maps

Gautam Pai¹, Jing Ren², Simone Melzi³, Peter Wonka², Maks Ovsjanikov¹

¹LIX, Ecole Polytechnique, IP Paris  ²KAUST  ³Sapienza University of Rome

Abstract

In this paper, we provide a theoretical foundation for pointwise map recovery from functional maps and highlight its relation to a range of shape correspondence methods based on spectral alignment. With this analysis in hand, we develop a novel spectral registration technique: Fast Sinkhorn Filters, which allows for the recovery of accurate and bijective pointwise correspondences with superior time and memory complexity in comparison to existing approaches. Our method combines the simple and concise representation of correspondence using functional maps with the matrix scaling schemes from computational optimal transport. By exploiting the sparse structure of the kernel matrices involved in the transport map computation, we provide an efficient trade-off between acceptable accuracy and complexity for the problem of dense shape correspondence, while promoting bijectivity.

1. Introduction

Non-rigid shape matching remains at the core of many computer vision tasks, including statistical shape analysis [4], texture mapping [10], and deformation transfer [41], among others.

Among many existing approaches for this problem [43, 38], a prominent overall strategy is to exploit spectral quantities, such as the eigenfunctions of the Laplace-Beltrami operator, which are naturally invariant to isometric shape deformations. Within this category, the functional map framework, introduced in [28], proposes an efficient way to represent and compute mappings and achieves state-of-the-art accuracy in difficult shape matching problems [23].

One of the advantages of this framework is that it allows one to formulate shape correspondence as a simple optimization problem relating the basis functions on the two shapes, from which a dense point-to-point correspondence can be extracted. This framework has been successfully applied in both axiomatic [28, 16, 14, 32] and learning-based [18, 35, 13, 8] settings. However, one of the recurring issues of virtually all works in this domain is defining and using the exact relation between functional and pointwise maps. This problem is especially relevant in the conversion step from functional to point-to-point correspondences. This conversion has been specifically treated in several works, including [28, 33, 11, 34, 15, 45, 23], among others. Despite this significant effort, the precise rigorous relation between the two representations still remains ill-defined. In this paper, we provide a rigorous theoretical justification for pointwise conversion from a functional map, and discuss how it is related to the problem of aligning the spectral embeddings of non-rigid shapes. We highlight that unlike a functional map, which is not well-suited for pointwise conversion, the adjoint operator can naturally be used for point-to-point map extraction both theoretically in the smooth setting, and in practice on discrete shapes.

¹Demo code: https://github.com/paigautam/CVPR21_FastSinkhornFilters

Figure 1: We formally justify pointwise map recovery from functional maps using the adjoint, and introduce Fast Sinkhorn Filters: an efficient method promoting bijectivity in this process. This yields better pointwise maps (color transfer) with improvement in bijectivity (errors in red). (Panels: Source; Functional Map; Adjoint + Fast Sinkhorn Filter.)

With this foundation in hand, we propose a general framework for iterative spectral alignment, and illustrate that many previous shape matching methods are regularized variants of our meta algorithm. Finally, we introduce an effective regularized procedure using Sinkhorn's algorithm to compute accurate near-bijective pointwise correspondences that can scale to densely-sampled shapes. We find that existing approaches do not allow the recovery of an accurate, smooth and bijective point-to-point correspondence with acceptable time and memory complexity. Such methods are either too inaccurate (e.g., nearest-neighbor) or too time and memory consuming (e.g., linear-assignment solvers), thereby making them infeasible for practical applications.

We use our analysis of spectral alignment to construct a sparse kernel assignment matrix, which is then efficiently processed using matrix scaling to output an entropic regularized transport plan. Finally, by extracting the maximum likelihood estimate of this plan, we demonstrate that our approach, termed Fast Sinkhorn Filters, produces accurate results often at a fraction of the cost of existing methods in both direct and iterative pointwise conversion applications.
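To make the matrix scaling step concrete, the following is a minimal dense Sinkhorn sketch in NumPy. It is illustrative only: the paper operates on sparse kernel matrices, whereas this toy uses a dense kernel with a made-up 1-D cost, bandwidth (0.05), and iteration count, all our own assumptions.

```python
import numpy as np

def sinkhorn_plan(K, n_iters=200):
    """Scale a positive kernel K into a transport plan P = diag(u) K diag(v)
    whose row and column sums are (approximately) one."""
    u = np.ones(K.shape[0])
    v = np.ones(K.shape[1])
    for _ in range(n_iters):
        u = 1.0 / (K @ v)       # enforce row marginals
        v = 1.0 / (K.T @ u)     # enforce column marginals
    return u[:, None] * K * v[None, :]

# toy example: two 1-D point sets, entropic kernel K = exp(-cost / eps)
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.0, 1.0, 5)
cost = (x[:, None] - y[None, :]) ** 2
P = sinkhorn_plan(np.exp(-cost / 0.05))
matches = P.argmax(axis=1)      # row-wise maximum-likelihood correspondence
```

Because the two point sets coincide, the scaled plan concentrates on the diagonal and the row-wise argmax recovers the identity matching.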

2. Related Work

Shape matching is a very well-studied area of computer vision and computer graphics, and its full overview is beyond the scope of our paper. Below, we review the work most closely related to ours and focus primarily on the functional maps framework. We refer the interested readers to recent surveys, including [43, 42, 3, 38], for an in-depth treatment of other shape matching approaches.

Functional Maps Our work focuses primarily on the functional map framework, which was introduced in [28] for solving near-isometric shape correspondence problems, and extended in many follow-up works, including [16, 1, 17, 32, 11, 6] among others (see also [29] for a general overview). The key idea in these techniques is to estimate linear transformations between spaces of real-valued functions, represented in a reduced functional basis. This linear structure implies that functional maps can be conveniently encoded as small matrices and optimized for using standard linear algebraic techniques.

In addition to the convenience of the representation itself, it has been observed by several works in this domain that many natural properties on the underlying pointwise correspondences can be expressed as objectives on functional maps [16, 36, 32, 6]. For example, orthonormal functional map matrices correspond to locally volume preserving maps [28, 16, 36], near isometries must result in functional maps that commute with the Laplacian [28, 46, 32, 20, 19], while conformal maps must preserve certain functional inner products [36, 6, 47].

These results typically assume that the functional map is induced as the pull-back of some underlying point-to-point correspondence. However, the space of linear functional transformations is strictly larger, which means that additional regularization is required. In [27, 26], the relation between pointwise and functional maps was studied, and the authors proposed an optimization term [27] aimed to promote only functional maps arising from point-to-point ones. That work still used the default conversion scheme from [28], however.

Functional Map Conversion More closely related to ours are works that directly consider the question of point-to-point correspondence recovery from functional maps. This step is instrumental in all functional maps-based correspondence methods. As we highlight below, the original method [28] suggested a recovery technique based on considering images of indicator functions at points, and then an efficient method based on iterative closest point in the spectral domain. Unfortunately, no justification or analysis was provided for whether this procedure has any analogue in the smooth setting.

Several follow-up works noted that the conversion step can have a fundamental limiting effect on the accuracy of the recovered maps [33, 11, 34, 15, 45, 23]. This has led to algorithms that incorporate smoothness using Coherent Point Drift in the spectral domain [33], penalizing spurious high frequencies during conversion [10], and using higher-order objectives such as maximizing kernel density [44, 45], among others. Nevertheless, despite this significant effort, the fundamental question of the relation between functional and pointwise maps in the smooth setting (i.e., independently of the shape discretization) remains open. This is unfortunate, as, for example, discrete differential geometry operators [24] such as the Laplacian are discretized precisely using principles from smooth manifolds, which contributes to their robustness to domain changes.

Interestingly, recent learning-based methods have also highlighted the importance of both robust discretization-insensitive conversion [18, 8, 35, 13] and of enforcing correct losses during training using either functional [35] or point-to-point correspondences [18, 12, 13]. In the latter category, conversion between functional and point-to-point maps is done as a non-learned layer in the network and thus must be correctly and consistently defined. The role of the adjoint was considered very recently in a learning context [22], although that work did not address standard functional map conversion nor draw links to existing methods.

We also note that several works have studied the importance of iterative conversion between pointwise and functional (or, more broadly, probabilistic) correspondences [23, 31, 44, 45]. As the conversion step is performed repeatedly within these approaches, it strongly contributes to the overall final accuracy [23].

Due to the ubiquitous nature of the conversion between functional and point-to-point correspondences, our analysis has direct implications in all of these scenarios. As we demonstrate below, the conversion that we consider and the resulting spectral alignment methods, while based on the same underlying principles as in some previous works [28, 10, 15], are theoretically better justified, more robust, and result in practical improvement, especially in challenging cases of shapes with different discretizations. Specifically, we remark that earlier approaches for pointwise conversion like [28, 15] lack a formal discussion of delta functions and the importance of combining them with adjoint operators, as we show in Theorem 1. Hence the pointwise conversion schemes in these methods are purely heuristic and ultimately rely on the same recovery approach as [28]. Additionally, we provide a proof of optimal spectral alignment, which is essential to justify the application of regularized optimal transport in order to enable a fast, accurate and bijective pointwise conversion.

Optimal Transport We also note briefly that optimal transport is another widely-used relaxation for matching problems [40, 21, 45, 9]. Our use of Sinkhorn's algorithm is directly inspired by advances in this area [7, 39]. Moreover, as we demonstrate below, existing techniques that use the formalism of optimal transport for solving assignment problems, including the Product Manifold Filter with heat kernels [21, 45], fall within the general spectral alignment formalism that we study. However, in contrast to these existing methods, which often rely on large dense matrices like geodesic distances or heat kernels, our formulation of the regularized transport problem is more efficient, as it only involves sparse matrix manipulation. Our use of Sinkhorn's algorithm is thus both robust and provides good results even for non-rigid 3D shapes with non-uniform sampling.

3. Regularized Spectral Alignment

In this section, we provide a formalism for pointwise map conversion from functional maps, and discuss how this is related to the problem of aligning the spectral embeddings of shapes. We then propose a general framework for iterative spectral alignment, and illustrate that many previous shape matching refinement methods are regularized variants of our meta algorithm.

3.1. Background and Operators in Smooth Settings

Functional maps Suppose we are given two smooth surfaces X, Y and a pointwise map TXY that maps a point in X to a point in Y. As introduced in [28], the functional map associated with the given pointwise map TXY is defined via pullback (denoted as TFYX): for any real-valued function f : Y → R, its image g = TFYX(f) is a real-valued function on X so that g(x) = f(TXY(x)) for any x ∈ X.

Note the change in direction between TXY and TFYX. Although a functional map can be used directly to transport real-valued functions, in most cases we are also interested in how to recover the pointwise map TXY from a functional map TFYX. In the original work [28] (Remark 4.1 in Section 4), the method that is alluded to is to use TFYX to map indicator functions, i.e., functions that equal 1 at some point and zero elsewhere. Unfortunately, this has a major problem in L2 (the space of square integrable functions), since such pointwise indicators are equivalent to the zero function. This means that in an orthonormal basis, such as the LB eigenfunctions, such functions will be represented as vectors of zeros. As a result, we cannot apply such a method in practice directly.

A more principled approach can be obtained by using the functional map adjoint, as we discuss below.

Adjoint Operator Given a functional map TFYX, the adjoint functional map operator TAXY is a linear operator that maps real-valued functions on X to those on Y, and is defined implicitly [15], so that for all f : Y → R, g : X → R:

〈 TAXY(g), f 〉Y = 〈 g, TFYX(f) 〉X   (1)

Here we denote by 〈 , 〉X and 〈 , 〉Y the L2 inner products for functions on shapes X and Y respectively. The adjoint always exists and is unique by the Riesz representation theorem (see also Theorem 3.1 in [15]). Note that the adjoint operator of functional maps has been considered, e.g., in [15], although its role in pointwise map recovery was not explicitly addressed in that work.

Note that the adjoint operator TAXY, unlike the functional map, maps functions in the same direction as the pointwise map TXY. Besides the consistent direction, the adjoint operator has another nice property: it always maps Dirac deltas to Dirac deltas, as shown in Theorem 1 below.

Importantly, Dirac deltas are not functions but are instead special cases of distributions, which are continuous linear functionals over the space of smooth square-integrable functions (also known as test functions). Thus, for a distribution d, given any smooth test function h, d(h) is a real value. One can construct a distribution df from a square-integrable function f via integration: df(h) = ∫ f(x) h(x) dμ(x). For the special case of the Dirac deltas, i.e., d = δx, we have δx(h) = h(x) for any test function h. Then, by definition, we write 〈 δx, h 〉 = h(x).

Theorem 1. Let TAXY be the adjoint operator associated with a point-to-point mapping TXY as in Eq. (1). Then TAXY δx = δ_TXY(x) for all x ∈ X.

Proof. Using Eq. (1) for any f : Y → R, we get:

〈 TAXY δx, f 〉Y = 〈 δx, TFYX f 〉X = 〈 δx, f ◦ TXY 〉X = f(TXY(x))

Therefore, TAXY δx equals some distribution d such that 〈 d, f 〉Y = f(TXY(x)) for any function f on Y. By uniqueness of distributions, this means that TAXY δx = δ_TXY(x).

To summarize, given a pointwise map TXY, we can obtain a functional map TFYX. We can then define the adjoint functional map TAXY. To recover TXY, we first define a Dirac delta function δx for each point x on X and map it using TAXY. Its image TAXY δx gives the Dirac delta function defined at the corresponding point TXY(x) ∈ Y, as required.


3.2. Operators and Deltas in Discrete Settings

The discussion above holds in the smooth setting. Our goal now is to demonstrate that the same results hold in the discrete setting and, moreover, lead to practical algorithms for pointwise map recovery.

We assume that each shape is represented as a triangle mesh. A smooth function f is discretized as a vector f that is defined at each of the vertices. We also assume we are given a symmetric mass matrix A so that the functional inner product 〈 f, g 〉 is discretized as f^T A g (see [24]). As is commonly done in geometry processing [37], we will sometimes assume that A is a diagonal matrix of area weights. While this assumption is not strictly required, it simplifies some of the calculations below.

Any pointwise map TXY from a triangle mesh X to a triangle mesh Y can be written as a binary matrix ΠYX of size nX × nY (the numbers of vertices on shapes X, Y respectively), so that ΠYX(x, y) equals 1 if TXY(x) = y and 0 otherwise. We can see that ΠYX ∈ R^{nX×nY} is a discrete functional map in the full basis that transfers a discrete function f ∈ R^{nY} from Y to a function g on X via matrix multiplication, i.e., g = ΠYX f ∈ R^{nX}.

Discrete Dirac deltas Recall the definition of Dirac deltas in the smooth setting: 〈 δx, h 〉 = h(x) must hold for any test function h. Discretizing this equation on a triangle mesh, we get δx^T A h = h(x) for any function h. Let ex denote the indicator at x, i.e., a vector that equals 1 in the position corresponding to x and zero elsewhere. We then get δx^T A h = ex^T h. Since this must hold for all h, and since A is symmetric, we get A δx = ex, or equivalently δx = A^{-1} ex. If we assume A to be diagonal, we obtain that δx is a vector such that δx(y) equals 1/A(x) if y = x and 0 otherwise. Remark that δx is not the same as the indicator (also known as the hat) function on the mesh. Instead, we must factor in the area of the corresponding point. Intuitively, this is because the delta function is "responsible" for the entire region around a given point.
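The distinction between δx and the indicator ex can be made concrete in a few lines of NumPy; the diagonal (lumped) mass matrix below uses made-up area values for illustration:

```python
import numpy as np

# hypothetical lumped mass matrix for a 4-vertex patch (diagonal area weights)
A = np.diag([0.2, 0.5, 0.1, 0.4])

x = 1
e_x = np.zeros(4)
e_x[x] = 1.0                         # indicator ("hat") function at vertex x

delta_x = np.linalg.solve(A, e_x)    # discrete delta: delta_x = A^{-1} e_x

# the defining property <delta_x, h> = h(x) holds for any discrete function h
h = np.array([3.0, -1.0, 2.0, 0.5])
assert np.isclose(delta_x @ A @ h, h[x])
```

Note that delta_x takes the value 1/A(x) at x, not 1, exactly as derived above.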

Discrete Adjoint Operators in Full Basis As defined in the smooth setting, the adjoint operator can be derived from the functional map via 〈 TAXY(g), f 〉Y = 〈 g, TFYX(f) 〉X. Similarly, the discrete adjoint operator in the full basis, ΓXY ∈ R^{nY×nX}, is a matrix that maps discrete functions from X to Y, i.e., 〈 ΓXY g, f 〉Y = 〈 g, ΠYX f 〉X.

Denoting the area matrices of shapes X, Y as AX, AY, we have f^T AY ΓXY g = f^T ΠYX^T AX g, which must hold for all pairs f, g. Therefore, we have:

ΓXY = AY^{-1} ΠYX^T AX   (2)

If AX, AY are diagonal matrices, this leads to:

ΓXY(y, x) = AX(x)/AY(y) if TXY(x) = y, and 0 otherwise   (3)
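Eqs. (2)-(3) and the discrete version of Theorem 1 can be checked numerically; the sketch below uses random diagonal area matrices and a made-up pointwise map (sizes and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
nX, nY = 4, 3
AX = np.diag(rng.uniform(0.1, 1.0, nX))    # toy diagonal area matrices
AY = np.diag(rng.uniform(0.1, 1.0, nY))

T = np.array([2, 0, 1, 0])                 # a pointwise map T_XY: X -> Y
Pi = np.zeros((nX, nY))
Pi[np.arange(nX), T] = 1.0                 # binary matrix Pi_YX

Gamma = np.linalg.inv(AY) @ Pi.T @ AX      # Eq. (2)

# discrete Theorem 1: Gamma maps delta_x to delta_{T(x)} exactly
x = 0
delta_x = np.linalg.solve(AX, np.eye(nX)[x])
delta_Tx = np.linalg.solve(AY, np.eye(nY)[T[x]])
assert np.allclose(Gamma @ delta_x, delta_Tx)
```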

Remark that Theorem 1 also holds in the discrete setting. Specifically, for any discrete Dirac delta δx = AX^{-1} ex on X, we have ΓXY δx = AY^{-1} ΠYX^T AX AX^{-1} ex = AY^{-1} ΠYX^T ex = AY^{-1} e_TXY(x) = δ_TXY(x), as required.

Discrete Adjoint Operators in a Reduced Basis Given the constructions above, it is easy to translate them to the setting where functions are represented through their coefficients in some possibly reduced basis, such as the eigenfunctions of the Laplace-Beltrami operator.

Suppose we are given a basis on triangle meshes X, Y that we store as columns of matrices ΦX, ΦY respectively. We assume that these bases are orthonormal with respect to the corresponding mass matrices AX, AY, i.e., ΦX^T AX ΦX = Id and ΦY^T AY ΦY = Id. Note that the basis matrices have sizes ΦX ∈ R^{nX×kX} and ΦY ∈ R^{nY×kY}, where kX ≤ nX, kY ≤ nY. In practice, the number of basis elements k is in the range [50, 200], while the number of vertices n can be in the tens of thousands.

In the reduced basis, a functional map simply "translates" the coefficients of functions expressed in the given basis. Specifically, following [29], a functional map in the reduced basis is simply given as CYX = ΦX^† ΠYX ΦY. Note that CYX ∈ R^{kX×kY} is a functional map in the reduced basis mapping coefficients of functions from Y to X. Applying the same idea to the adjoint operator, we obtain the adjoint operator in the reduced basis, i.e., DXY = ΦY^† ΓXY ΦX. Plugging in Eq. (2), we have:

DXY = ΦY^† ΓXY ΦX = ΦY^† AY^{-1} ΠYX^T AX ΦX = ΦY^T ΠYX^T (ΦX^†)^T = CYX^T   (4)

Here we used the fact that ΦS^† = ΦS^T AS (S = X, Y), since the basis is assumed to be orthonormal. This calculation shows that D in the reduced basis is simply the transpose of the functional map C in the opposite direction.
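Eq. (4) can be verified numerically. The sketch below builds random A-orthonormal bases via a Cholesky trick (the sizes and the random pointwise map are illustrative assumptions) and checks that DXY = CYX^T holds exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
nX, nY, k = 40, 50, 6
AX = np.diag(rng.uniform(0.5, 1.5, nX))
AY = np.diag(rng.uniform(0.5, 1.5, nY))

def orthonormal_basis(A, k):
    """Random basis Phi with Phi^T A Phi = Id (A-orthonormal columns)."""
    Q = rng.standard_normal((A.shape[0], k))
    L = np.linalg.cholesky(Q.T @ A @ Q)
    return Q @ np.linalg.inv(L).T

PhiX = orthonormal_basis(AX, k)
PhiY = orthonormal_basis(AY, k)

T = rng.integers(0, nY, nX)                 # a random pointwise map X -> Y
Pi = np.zeros((nX, nY))
Pi[np.arange(nX), T] = 1.0

C = (PhiX.T @ AX) @ Pi @ PhiY               # C_YX = PhiX^dagger Pi_YX PhiY
Gamma = np.linalg.inv(AY) @ Pi.T @ AX       # Eq. (2)
D = (PhiY.T @ AY) @ Gamma @ PhiX            # D_XY = PhiY^dagger Gamma PhiX
assert np.allclose(D, C.T)                  # Eq. (4)
```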

We note that Theorem 1 also holds approximately in the discrete setting with the reduced basis. Recall that the discrete Dirac delta on shape X in the full basis is δx = AX^{-1} ex. We can compute its corresponding coefficient vector dx w.r.t. the reduced basis ΦX, i.e., dx = ΦX^† AX^{-1} ex = ΦX^T AX AX^{-1} ex = ΦX^T ex. We then have:

DXY dx = (ΦY^† AY^{-1} ΠYX^T AX ΦX) ΦX^T ex = ΦY^T AY AY^{-1} ΠYX^T AX ΦX ΦX^T ex = ΦY^T ΠYX^T AX ΦX ΦX^T ex ≈ ΦY^T ΠYX^T ex = ΦY^T e_TXY(x) = d_TXY(x)

Here, the approximation error comes from the functional basis truncation. Since ΦX^T (AX ΦX ΦX^T − Id) = 0 holds, ex can be considered as an approximation of AX ΦX ΦX^T ex. We therefore have DXY dx ≈ d_TXY(x).
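A quick numerical check of this approximation (the helper name delta_transfer_error is ours; a random A-orthonormal basis on a single shape is used for both source and target, so basis truncation is the only error source, and all sizes are illustrative):

```python
import numpy as np

def delta_transfer_error(n, k, seed=5):
    """||D d_x - d_{T(x)}|| for a random A-orthonormal basis of size k."""
    rng = np.random.default_rng(seed)
    A = np.diag(rng.uniform(0.5, 1.5, n))
    Q = rng.standard_normal((n, k))
    L = np.linalg.cholesky(Q.T @ A @ Q)
    Phi = Q @ np.linalg.inv(L).T            # Phi^T A Phi = Id
    T = rng.permutation(n)                  # a bijective pointwise map
    Pi = np.zeros((n, n))
    Pi[np.arange(n), T] = 1.0
    D = ((Phi.T @ A) @ Pi @ Phi).T          # D = C^T, Eq. (4)
    dx = Phi.T @ np.eye(n)[0]               # d_x = Phi^T e_x
    dTx = Phi.T @ np.eye(n)[T[0]]
    return np.linalg.norm(D @ dx - dTx)

full = delta_transfer_error(60, 60)         # full basis: relation is exact
trunc = delta_transfer_error(60, 10)        # truncated basis: only approximate
```

In the full basis the transfer error vanishes (up to numerics), while truncation introduces a nonzero error, matching the derivation above.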

In summary, in this section, we justified that the adjointoperators in the discrete setting, both in the full basis or


[Figure 2 plots: recovery error (0 to 0.35) vs. spectral basis size (20 to 100) when recovering TXY and TYX, comparing "use adjoint" against "use fmap", for two shape pairs (Case 01 and Case 02).]


Figure 2: Top: a pair of near-isometric deformed spheres. Bottom: a pair of non-isometric deformed spheres. We compare the pointwise map recovery error between using the adjoint operators (Eq. (5) and (6)) and using the functional maps as suggested in [28].

reduced basis, lead to the same result as highlighted in Theorem 1. Specifically, the discrete adjoint operators always map the discrete Dirac deltas (coefficients) to Dirac deltas (coefficients). Note that the same claim does not hold for the functional map itself, both because the map direction is reversed and because area elements might be different.

3.3. Pointwise Map Conversion

Overview Recall that in the smooth setting, we can always convert a given pointwise map into a functional map and then use Eq. (1) to obtain the adjoint operator; we can then recover the original pointwise map from the adjoint operator as shown in Theorem 1, without any additional assumptions on the map (e.g., isometric or locally volume preserving maps). This one-to-one relationship between pointwise maps and the adjoint functional map also holds exactly in the discrete setting in both the full and reduced basis, up to basis truncation errors in the latter.

This suggests the following pipeline for computing pointwise correspondences: first, we can use existing methods [28, 31, 30] to compute a functional map CYX through optimization using some descriptors. Then, the adjoint operator can be obtained by setting DXY = CYX^T. Finally, we can extract a pointwise map TXY from DXY by using DXY to map the coefficients of the Dirac deltas.

Map conversion We now discuss in detail how to extract a pointwise map TXY from an adjoint functional map DXY. Recall that we have DXY dx ≈ d_TXY(x), which can be equivalently written as DXY ΦX^T ex ≈ ΦY^T e_TXY(x). Therefore, we have TXY(x) = NNsearch(ΦY, ex^T ΦX DXY^T), where NNsearch(P, q) returns the closest point (nearest neighbor) in P to the query point q, with the points in P stored in rows. We then iterate through all the points x in shape X and obtain the pointwise map:

TXY = NNsearch(ΦY, ΦX DXY^T)   (5)

We can similarly extract a pointwise map TYX in the opposite direction from DXY via:

TYX = NNsearch(ΦX DXY^T, ΦY)   (6)

We emphasize that this is different from the pointwise map conversion proposed in [28] and used in follow-up works including [15], where the functional map matrix CYX is used directly to transport delta functions. As remarked above, this procedure unfortunately has no justification in the smooth setting.

Algorithm 1: Iterative Meta Algorithm (IMA)
Input: A pair of shapes X, Y with bases ΦX, ΦY
Output: Pointwise map TXY and adjoint map DXY
Initialization: An initial guess of DXY
while not converged do
  (1) Extract map TXY from DXY (e.g., Eq. (5))
  (2) Convert the extracted TXY to DXY (Eq. (4))
end

Synthetic example Fig. 2 shows two pairs of deformed spheres with different underlying mesh structure, and we compare the map conversion error between using the adjoint operators and the functional maps when transferring delta functions. Remark that for a pair of shapes that are near-isometric (top of Fig. 2), implying that local areas are preserved by the underlying map, using functional maps to recover the pointwise correspondence is comparable (while nevertheless being slightly worse) to using the adjoint operators. However, when the two shapes are far from isometric (bottom of Fig. 2), using the adjoint operators achieves significantly better results than using the functional maps, which may not even converge with increasing basis size (bottom right).

3.4. Spectral Embedding Alignment

The discussion above focuses on the functional map representation and the conversion step between functional maps and point-to-point ones. Another, very fruitful, perspective on the same procedure can be obtained by considering the spectral embeddings of the two shapes and alignments between them.

Given a shape X with the reduced basis ΦX, we call its spectral embedding the point cloud obtained by constructing a point for each row of ΦX, using each column as a coordinate. From Section 3.1, this is the same as constructing


[Figure 3 pipeline: aligned spectral embeddings of shapes X and Y → sparse kernel assignment → Sinkhorn iterations (scaling vectors u, v) → entropic regularized transport plan → final correspondences (argmax).]

Figure 3: Fast Sinkhorn Filters: illustrating spectral embedding alignment using regularized optimal transport. Our pipeline takes as input a set of aligned basis functions and outputs a pointwise correspondence between the shapes.

a point cloud where the coordinates of each point represent the coefficients of its delta function in the reduced basis.

Optimal linear alignment Recall from the discussion above that we have DXY dx ≈ d_TXY(x). Since dx represents the coefficients of δx in the reduced basis of shape X, and DXY dx represents the coefficients of its image in the reduced basis, this means that the operator DXY aligns the spectral embeddings of the shapes X and Y. This can also be written as follows: DXY ΦX^T ≈ (ΠYX ΦY)^T.

Interestingly, the adjoint operator can also be deduced as the optimal linear transformation that aligns the spectral embeddings, given a point-to-point map, i.e.,

DXY = arg min_X ‖ X ΦX^T − (ΠYX ΦY)^T ‖   (7)

which can be easily demonstrated via:

X* = arg min_X ‖ X ΦX^T − (ΠYX ΦY)^T ‖ = arg min_X ‖ ΦX X^T − ΠYX ΦY ‖ = (ΦX^† ΠYX ΦY)^T = CYX^T = DXY   (8)

Intuitively, this means that the adjoint is the optimal linear operator that aligns two spectral embeddings given a pointwise map. Again, we stress that the same interpretation does not hold for the functional map itself, due to the reasons mentioned above.
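The least-squares characterization in Eqs. (7)-(8) can be checked directly. The sketch below assumes uniform (identity) mass matrices, so that Φ^† reduces to the plain pseudoinverse and a standard solver applies (sizes and the random map are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
nX, nY, k = 30, 40, 5

# orthonormal bases w.r.t. A = Id (uniform areas)
PhiX, _ = np.linalg.qr(rng.standard_normal((nX, k)))
PhiY, _ = np.linalg.qr(rng.standard_normal((nY, k)))

T = rng.integers(0, nY, nX)                # a random pointwise map X -> Y
Pi = np.zeros((nX, nY))
Pi[np.arange(nX), T] = 1.0

# Eq. (7): argmin_X ||X PhiX^T - (Pi PhiY)^T||, i.e. ||PhiX X^T - Pi PhiY||
Xstar = np.linalg.lstsq(PhiX, Pi @ PhiY, rcond=None)[0].T

C = np.linalg.pinv(PhiX) @ Pi @ PhiY       # C_YX
assert np.allclose(Xstar, C.T)             # Eq. (8): the minimizer is D = C^T
```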

Iterative meta algorithm (IMA) As remarked above, the spectral embedding ΦX of a shape X can be interpreted as a point cloud in kX-dimensional space. Further, the adjoint functional map DXY is the optimal linear transformation that aligns the spectral embeddings of X and Y given a point-to-point map.

Since a priori we do not have access to either the point-to-point map or the functional map (or its adjoint), this suggests the following iterative procedure: we first estimate an initial functional (or adjoint) map; we then iteratively extract a pointwise map T_XY from the adjoint map D_XY, and, again, compute the adjoint map D_XY from the obtained pointwise map T_XY. We call this scheme an Iterative Meta Algorithm (summarized in Algorithm 1).
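The bare alternation described above can be sketched in a few lines of NumPy/SciPy (the function name and the use of a k-d tree for nearest-neighbor conversion are our implementation choices; no regularization is applied):

```python
import numpy as np
from scipy.spatial import cKDTree

def iterative_meta_algorithm(PhiX, PhiY, D, n_iter=10):
    """Bare-bones IMA sketch: alternate between (a) converting the adjoint
    map D = D_XY to a pointwise map T via nearest neighbours between the
    aligned spectral embeddings, and (b) refitting D from T as in Eq. (8)."""
    for _ in range(n_iter):
        # (a) rows of PhiX @ D.T are matched to their nearest rows of PhiY,
        #     since D_XY PhiX^T ~ (Pi_YX PhiY)^T.
        T = cKDTree(PhiY).query(PhiX @ D.T)[1]
        # (b) D = (PhiX^+ Pi_YX PhiY)^T, where Pi_YX @ PhiY == PhiY[T].
        D = np.linalg.lstsq(PhiX, PhiY[T], rcond=None)[0].T
    return T, D
```

On identical embeddings with an identity initialization, this fixed point is the identity map; as discussed next, real use requires additional regularization.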

Regularized spectral alignment. In practice, this simple procedure does not work well, even though its convergence is guaranteed, for the following reasons: (1) The dimensionality of the spectral embedding is typically quite high, which means that there can be many local minima in the alignment energy. (2) The procedure provides no guarantees on what kind of map (pointwise or functional) is recovered. For example, in the full basis (i.e., when the number of basis functions equals the number of points) there exists, for any point-to-point map, a linear transformation that aligns the spectral embeddings perfectly; the procedure therefore terminates in a single iteration. For these reasons, some additional information must be injected into this basic method. For example, existing methods such as ICP [28], PMF [45], and ZoomOut [23] can be cast as variants of IMA with additional regularization.

Specifically, ICP [28] promotes orthonormal functional maps by projecting the singular values of the functional map to 1 at each iteration, which corresponds to locally area-preserving correspondences. PMF [45] introduces bijectivity as a hard constraint for pointwise map conversion, which turns the procedure into an assignment problem solved with the auction algorithm; at the same time, by varying the time parameter of the kernel matrices involved, the size of the basis used for the spectral embedding is effectively increased between iterations. ZoomOut [23] promotes orthonormal functional maps for each principal submatrix by progressively increasing the size of the basis for the spectral embedding between iterations. See the appendix for pseudo-code of these algorithms.
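For instance, the ICP-style orthonormality projection can be written in a few lines of NumPy (the helper name is ours; this is only the projection step, not the full ICP loop):

```python
import numpy as np

def project_orthonormal(C):
    """Project a square functional map C onto the orthonormal matrices
    by replacing all of its singular values with 1."""
    U, _, Vt = np.linalg.svd(C)
    return U @ Vt

C = np.random.default_rng(3).standard_normal((6, 6))
Q = project_orthonormal(C)
assert np.allclose(Q @ Q.T, np.eye(6))  # orthonormal after projection
```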


Algorithm 2: Fast Sinkhorn Filtering
Input: Φ_X, Φ_Y, D_XY; parameters λ, k0, N0
Output: pointwise maps Π_XY and Π_YX

Step 1: Populate the sparse kernel assignment matrix K^λ ∈ R^{nX×nY} as follows:
  1. Align the spectral basis functions: F_X = Φ_X D_YX^T ∈ R^{nX×k} and F_Y = Φ_Y ∈ R^{nY×k}.
  2. For each row of F_X, find the k0 nearest neighbors in F_Y and let d_ij = ‖F_X(i,:) − F_Y(j,:)‖ be the aligned spectral distance between the i-th point of shape X and the j-th point of shape Y. Set K^λ_ij = e^{−λ d_ij²}.

Step 2: Estimate the regularized transport plan P_XY.
  Set a = (1/nY) 1_nY, b = (1/nX) 1_nX; initialize u_0 = b.
  for k = 1, 2, ..., N0 do
      v_k ← a / (K^λT u_{k−1})    # elementwise division
      u_k ← b / (K^λ v_k)
  end
  P_XY = Diag(u_N0) K^λ Diag(v_N0)

Step 3: Extract vertex-to-vertex maps:
  Π_XY = argmax(P_XY^T), Π_YX = argmax(P_XY)
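A NumPy/SciPy sketch of Algorithm 2 follows. The function name, the tiny epsilon guarding empty kernel rows/columns, and the use of a k-d tree for the k0-nearest-neighbor search are our implementation choices, not prescribed by the paper.

```python
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def fast_sinkhorn_filter(PhiX, PhiY, D_XY, lam=100.0, k0=10, N0=100):
    # Step 1: sparse kernel assignment matrix K (nX x nY).
    FX, FY = PhiX @ D_XY.T, PhiY              # aligned spectral embeddings
    nX, nY = FX.shape[0], FY.shape[0]
    dists, idx = cKDTree(FY).query(FX, k=k0)  # k0 nearest neighbours per row
    rows = np.repeat(np.arange(nX), k0)
    K = sparse.csr_matrix((np.exp(-lam * dists.ravel() ** 2),
                           (rows, idx.ravel())), shape=(nX, nY))

    # Step 2: Sinkhorn matrix scaling towards uniform marginals.
    a = np.full(nY, 1.0 / nY)
    b = np.full(nX, 1.0 / nX)
    u = np.ones(nX)
    for _ in range(N0):
        v = a / (K.T @ u + 1e-300)            # elementwise division; the
        u = b / (K @ v + 1e-300)              # eps guards empty rows/columns
    P = sparse.diags(u) @ K @ sparse.diags(v) # regularized transport plan

    # Step 3: extract vertex-to-vertex maps by argmax.
    P = P.toarray()
    return P.argmax(axis=1), P.argmax(axis=0)  # T_XY, T_YX
```

Because K stores only k0 entries per row, both the memory footprint and the cost of each scaling iteration scale with k0 · nX rather than nX · nY.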

4. Fast Sinkhorn Filters

We propose to solve the spectral embedding alignment as a linear assignment problem in which the pointwise map Π is a doubly stochastic matrix with search space Q. We can therefore formulate our problem as:

    P_XY = argmin_{Π∈Q} ⟨d, Π⟩_F        (9)

where Q = { Π | Π ∈ R^{nX×nY}_{≥0}, Π 1_nY = μ_X, Π^T 1_nX = μ_Y }; μ_X and μ_Y are initial masses for X and Y, predefined on each vertex, and d ∈ R^{nX×nY} is the matrix of pairwise Euclidean distances between the aligned embeddings:

    d_ij = ‖Φ_X(i,:) D_XY^T − Φ_Y(j,:)‖        (10)

To solve the above problem efficiently, we use the well-known Sinkhorn algorithm from optimal transport theory, a tool that allows the computation of distances between functions on a common domain. Specifically, given two probability distributions on a common metric space, the optimal transport distance is the cumulative effort required to shift the mass from each location of the first distribution to some location in the second distribution such that the linear assignment cost is minimized. Importantly, an entropic adjustment to the linear assignment cost of Eq. (9) allows for a much faster and computationally superior numerical solver, the Sinkhorn algorithm. We therefore solve:

    P_XY = argmin_{Π∈Q} ⟨d, Π⟩_F − λ H(Π)        (11)

Table 1: Comparing different pointwise map conversion methods. We measure different metrics on the recovered maps from 200 pairs of FAUST regular (left) and 190 pairs of FAUST remeshed (right).

Method    | Accuracy (×10^-3) | Bijectivity (×10^-3) | Coverage (%) | Smoothness | Runtime (s)
Auction   | 3.6 / 55          | 0 / 0                | 100 / 99     | 5.7 / 52.7 | 11.7 / 5.9
NN        | 9.5 / 42          | 8.2 / 27             | 75.8 / 47.8  | 4.8 / 6.5  | 0.5 / 0.2
CPD       | 7.6 / 32          | 10 / 25              | 82.6 / 71.1  | 4.7 / 6.8  | 13.3 / 6.5
Sinkhorn  | 4.9 / 33          | 1.7 / 7              | 93.6 / 79.7  | 5.5 / 11.1 | 2.1 / 1.4

where H(Π) = −Σ_{i,j} π_ij log(π_ij). We can see that when λ = 0, this problem is equivalent to the original spectral alignment problem of Eq. (9). As discussed in [39], with nonzero λ, the term −H(Π) makes the energy of Eq. (11) strictly convex, and therefore a unique minimizer exists.
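As a concrete illustration (a toy problem with made-up 1-D point sets, not part of the paper's pipeline), a few dense Sinkhorn iterations produce a plan that satisfies both marginal constraints of the set Q:

```python
import numpy as np

# Two small 1-D point sets with uniform masses mu_x, mu_y.
rng = np.random.default_rng(2)
x, y = np.sort(rng.random(5)), np.sort(rng.random(7))
d = np.abs(x[:, None] - y[None, :])       # pairwise distance matrix

lam = 0.5                                 # regularization strength
K = np.exp(-d ** 2 / lam)                 # kernel, as in Eq. (12)
mu_x, mu_y = np.full(5, 1 / 5), np.full(7, 1 / 7)

u = np.ones(5)
for _ in range(200):                      # Sinkhorn matrix scaling
    v = mu_y / (K.T @ u)
    u = mu_x / (K @ v)
P = u[:, None] * K * v[None, :]           # entropic transport plan

assert np.allclose(P.sum(axis=1), mu_x)   # row marginal equals mu_x
assert np.allclose(P.sum(axis=0), mu_y)   # column marginal equals mu_y
```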

The input to the regularized transportation problem is an nX-by-nY distance matrix. We first compute a kernel assignment matrix from the input pairwise distances:

    K^λ = [ k^λ_ij = e^{−d_ij²/λ} ],  i = 1, 2, ..., nX,  j = 1, 2, ..., nY        (12)

where the distances d_ij are defined in Eq. (10). This matrix is then iteratively subjected to a matrix scaling procedure leading to a regularized transport plan. See Figure 3.

Note that computing and storing a distance matrix of dimension nX × nY can be very time and memory consuming, especially when the resolution of the shapes is very large. In addition, the value of the kernel in Eq. (12) is significant only for distances that are quite small. Taking advantage of this simple insight, we modify the kernel function of Eq. (12) and construct a sparse kernel assignment matrix populated by the distance values for only a few nearest neighbors of each point: K^λ = [ k^λ_ij ], i = 1, 2, ..., nX, j ∈ N{i}. See Algorithm 2 for a detailed outline.

Our construction of the sparse kernel in the Sinkhorn iterations leads to improved accuracy and bijectivity without incurring a large runtime. Our solution is therefore simultaneously fast, accurate and bijective, in contrast to prior pointwise registration methods, as shown in Table 1.

5. Experiments and Evaluation

We evaluate four pointwise conversion algorithms: Nearest Neighbor, Auction [2], Coherent Point Drift [25], and our Sinkhorn Filter, on 200 pairs of FAUST [5] and 190 test pairs of the FAUST remeshed dataset [31]. The auction algorithm [2] solves for a permutation matrix for the linear assignment problem. The coherent point drift algorithm is a point set registration technique that models the correspondence as a probability density optimized via expectation maximization [25, 33, 34].


Table 2: Quantitative evaluation on 300 FAUST regular shape pairs (left) vs. 190 FAUST remeshed pairs and 153 SCAPE remeshed pairs (right). We compare our methods (ICP with Sinkhorn and ZoomOut with Sinkhorn) with the baselines across different geometric and functional metrics.

Method         | Accuracy (×10^-3) | Bijectivity (×10^-3) | Coverage    | Smoothness  | Orthogonality | Laplacian Commutativity | ZoomOut Energy | Chamfer Distance | Avg. Runtime (s)
Ini            | 67.3 / 46.5 | 80.1 / 30.2 | 41.3 / 49.5 | 9.54 / 6.88 | 11.9 / 2.49 | 353 / 767 | 11.8 / 6.58 | 5.29 / 3.85 | - / -
ICP            | 76.0 / 30.4 | 75.0 / 10.2 | 76.8 / 76.4 | 8.09 / 5.11 | 1.38 / 1.07 | 224 / 493 | 4.21 / 3.46 | 2.89 / 2.56 | 10.2 / 5.32
ICP (sink)     | 68.6 / 29.5 | 4.38 / 8.07 | 90.1 / 87.3 | 13.0 / 5.89 | 1.42 / 1.21 | 255 / 448 | 4.69 / 3.38 | 2.92 / 2.42 | 30.4 / 15.8
Deblur         | 61.9 / 44.4 | 75.0 / 22.4 | 39.9 / 43.7 | 7.62 / 4.80 | 11.2 / 2.63 | 361 / 805 | 12.1 / 6.79 | 4.64 / 3.31 | 10.9 / 10.4
RHM            | 41.9 / 32.0 | 22.7 / 15.1 | 78.3 / 76.5 | 4.28 / 3.40 | 4.00 / 1.91 | 273 / 647 | 6.01 / 4.51 | 3.91 / 2.89 | 41.4 / 47.4
PMF            | 26.4 / 86.4 | 1.99 / 37.7 | 100 / 100   | 24.0 / 34.9 | 1.11 / 2.44 | 164 / 576 | 4.02 / 7.97 | 2.72 / 4.09 | 737 / 312
BCICP          | 21.6 / 26.4 | 4.48 / 12.6 | 88.9 / 77.6 | 4.73 / 4.22 | 1.21 / 1.00 | 186 / 404 | 3.38 / 3.24 | 2.52 / 2.42 | 184 / 364
ZoomOut        | 15.8 / 22.7 | 13.6 / 6.47 | 88.0 / 82.1 | 3.49 / 3.46 | 1.32 / 0.99 | 153 / 405 | 3.18 / 2.95 | 1.89 / 2.16 | 9.60 / 6.49
ZoomOut (sink) | 12.6 / 20.8 | 1.57 / 6.44 | 93.9 / 88.5 | 3.44 / 3.40 | 1.37 / 0.99 | 148 / 394 | 3.21 / 3.07 | 1.91 / 2.17 | 28.2 / 19.08

All four registration methods were initialized with the same pair of bases, aligned with the adjoint map in Eq. (5).

Table 1 highlights the performance of each of these algorithms, which are essential components of almost all correspondence methods, including prominent recent ones [9, 23, 45]. We measure the following metrics: accuracy, bijectivity, smoothness, coverage, and average runtime for each conversion. Please refer to the supplementary material for detailed definitions of these metrics. Notice that the auction algorithm, although superior in the very idealized setting of equal sampling and identical connectivity, is not robust to a change in the discretization of the surface. On the other hand, even though the nearest neighbor method has the smallest runtime, it suffers from poor accuracy, bijectivity and coverage. Coherent point drift suffers from a similar drawback of poor bijectivity. In contrast, the Fast Sinkhorn Filter produces an accurate and bijective output within a very reasonable runtime, and these properties are robust to remeshing of the underlying surface.

We further demonstrate the efficacy of our Sinkhorn filter by using it in conjunction with two prominent variants of the iterative meta algorithm (Algorithm 1): ICP [28] and ZoomOut [23]. We replace the nearest neighbor algorithm in both methods with the Sinkhorn filter and compare the Sinkhornized versions of ICP and ZoomOut with previous competing refinement methods on the FAUST regular [5] and the FAUST and SCAPE remeshed [31] datasets, as shown in Table 2 and visualized in Figure 4. We compare extensively using the commonly used geometric and functional metrics; please see the detailed definitions in the appendix. The Sinkhornized versions of ICP and ZoomOut both achieve better accuracy (a 20% improvement on FAUST regular and 8% on the remeshed datasets) and better bijectivity of the maps w.r.t. the original ICP/ZoomOut.

[Figure 4 plots: ground-truth error and spectral Chamfer distance vs. number of iterations for ICP-NN, ICP-Sinkhorn, ZM-NN and ZM-Sinkhorn; normalized ground-truth error vs. number of Sinkhorn iterations for NN and Sinkhorn on FAUST and FAUST remeshed; recovery error vs. spectral basis size when recovering T_XY and T_YX using the adjoint vs. the functional map (Cases 01 and 02).]

Figure 4: Sinkhornizing ICP and ZoomOut: Comparison between the original and Sinkhornized ICP and ZoomOut refinement algorithms. (top) Pointwise map errors visualized on the target shape (source, ICP(NN), ICP(sink), ZM(NN), ZM(sink)). Replacing the nearest neighbor conversion step inside ICP/ZoomOut with our Fast Sinkhorn Filter achieves improved accuracy and better spectral alignment.

6. Conclusion

In this paper we propose a theoretical foundation for the problem of pointwise conversion of functional maps and discuss its connection to regularized spectral alignment. Based on this foundation we propose a novel spectral registration technique using optimal transport for spectral alignment, and demonstrate that it improves the accuracy and bijectivity of correspondences both independently and in conjunction with existing iterative meta algorithms. One limitation of the Sinkhorn filter is poor generalization to the challenging case of partiality, which we would like to address in future work.

Acknowledgements. Parts of this work were supported by the KAUST OSR Award No. CRG-2017-3426, the ERC Starting Grant No. 758800 (EXPROTEA), the ANR AI Chair AIGRETTE, and the ERC Starting Grant No. 802554 (SPECGEO).


References

[1] Yonathan Aflalo and Ron Kimmel. Spectral multidimensional scaling. PNAS, 110(45):18052–18057, 2013.
[2] Dimitri P. Bertsekas. The auction algorithm: A distributed relaxation method for the assignment problem. Annals of Operations Research, 14(1):105–123, 1988.
[3] Silvia Biasotti, Andrea Cerri, Alex Bronstein, and Michael Bronstein. Recent trends, applications, and perspectives in 3d shape similarity assessment. Computer Graphics Forum, 35(6):87–119, 2016.
[4] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In Proc. CVPR, pages 3794–3801, Columbus, Ohio, 2014. IEEE.
[5] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3794–3801, 2014.
[6] Oliver Burghard, Alexander Dieckmann, and Reinhard Klein. Embedding shapes with Green's functions for global shape matching. Computers & Graphics, 68:1–10, 2017.
[7] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300, 2013.
[8] Nicolas Donati, Abhishek Sharma, and Maks Ovsjanikov. Deep geometric functional maps: Robust feature learning for shape correspondence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8592–8601, 2020.
[9] Marvin Eisenberger, Aysim Toker, Laura Leal-Taixe, and Daniel Cremers. Deep shells: Unsupervised shape correspondence with optimal transport, 2020.
[10] Danielle Ezuz and Mirela Ben-Chen. Deblurring and denoising of maps between shapes. Computer Graphics Forum, 36(5):165–174, 2017.
[11] Danielle Ezuz and Mirela Ben-Chen. Deblurring and denoising of maps between shapes. Computer Graphics Forum, 36(5):165–174, 2017.
[12] Dvir Ginzburg and Dan Raviv. Cyclic functional mapping: Self-supervised correspondence between non-isometric deformable shapes. In European Conference on Computer Vision, pages 36–52. Springer, 2020.
[13] Oshri Halimi, Or Litany, Emanuele Rodola, Alex M. Bronstein, and Ron Kimmel. Unsupervised learning of dense shape correspondence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4370–4379, 2019.
[14] Qixing Huang, Fan Wang, and Leonidas Guibas. Functional map networks for analyzing and exploring large shape collections. ACM Transactions on Graphics (TOG), 33(4):1–11, 2014.
[15] Ruqi Huang and Maks Ovsjanikov. Adjoint map representation for shape analysis and matching. Computer Graphics Forum, 36(5):151–163, 2017.
[16] Artiom Kovnatsky, Michael Bronstein, Alex Bronstein, Klaus Glashoff, and Ron Kimmel. Coupled quasi-harmonic bases. Computer Graphics Forum, 32(2pt4):439–448, 2013.
[17] Artiom Kovnatsky, Michael M. Bronstein, Xavier Bresson, and Pierre Vandergheynst. Functional correspondence by matrix completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 905–914, 2015.
[18] Or Litany, Tal Remez, Emanuele Rodola, Alex Bronstein, and Michael Bronstein. Deep functional maps: Structured prediction for dense shape correspondence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5659–5667. IEEE, 2017.
[19] Or Litany, Emanuele Rodola, Alex Bronstein, and Michael Bronstein. Fully spectral partial shape matching. Computer Graphics Forum, 36(2):247–258, 2017.
[20] Or Litany, Emanuele Rodola, Alexander M. Bronstein, Michael M. Bronstein, and Daniel Cremers. Non-rigid puzzles. In Computer Graphics Forum, volume 35, pages 135–143. Wiley Online Library, 2016.
[21] Manish Mandad, David Cohen-Steiner, Leif Kobbelt, Pierre Alliez, and Mathieu Desbrun. Variance-minimizing transport plans for inter-surface mapping. ACM Transactions on Graphics, 36:14, 2017.
[22] Riccardo Marin, Marie-Julie Rakotosaona, Simone Melzi, and Maks Ovsjanikov. Correspondence learning via linearly-invariant embedding, 2020.
[23] Simone Melzi, Jing Ren, Emanuele Rodola, Abhishek Sharma, Peter Wonka, and Maks Ovsjanikov. ZoomOut: Spectral upsampling for efficient shape correspondence. ACM Transactions on Graphics (TOG), 38(6):155:1–155:14, Nov. 2019.
[24] Mark Meyer, Mathieu Desbrun, Peter Schroder, and Alan H. Barr. Discrete differential-geometry operators for triangulated 2-manifolds. In Visualization and Mathematics III, pages 35–57. Springer, 2003.
[25] Andriy Myronenko and Xubo Song. Point set registration: Coherent point drift. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2262–2275, 2010.
[26] Dorian Nogneng, Simone Melzi, Emanuele Rodola, Umberto Castellani, M. Bronstein, and Maks Ovsjanikov. Improved functional mappings via product preservation. In Computer Graphics Forum, volume 37, pages 179–190. Wiley Online Library, 2018.
[27] Dorian Nogneng and Maks Ovsjanikov. Informative descriptor preservation via commutativity for shape matching. Computer Graphics Forum, 36(2):259–267, 2017.
[28] Maks Ovsjanikov, Mirela Ben-Chen, Justin Solomon, Adrian Butscher, and Leonidas Guibas. Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG), 31(4):30:1–30:11, 2012.
[29] Maks Ovsjanikov, Etienne Corman, Michael Bronstein, Emanuele Rodola, Mirela Ben-Chen, Leonidas Guibas, Frederic Chazal, and Alex Bronstein. Computing and processing correspondences with functional maps. In ACM SIGGRAPH 2017 Courses, pages 5:1–5:62, 2017.
[30] Jing Ren, Mikhail Panine, Peter Wonka, and Maks Ovsjanikov. Structured regularization of functional map computations. In Computer Graphics Forum, volume 38, pages 39–53. Wiley Online Library, 2019.
[31] Jing Ren, Adrien Poulenard, Peter Wonka, and Maks Ovsjanikov. Continuous and orientation-preserving correspondences via functional maps. ACM Transactions on Graphics (TOG), 37(6), 2018.
[32] Emanuele Rodola, Luca Cosmo, Michael M. Bronstein, Andrea Torsello, and Daniel Cremers. Partial functional correspondence. In Computer Graphics Forum, volume 36, pages 222–236. Wiley Online Library, 2017.
[33] Emanuele Rodola, Michael Moeller, and Daniel Cremers. Point-wise map recovery and refinement from functional correspondence. In Proc. Vision, Modeling and Visualization (VMV), 2015.
[34] Emanuele Rodola, Michael Moeller, and Daniel Cremers. Regularized pointwise map recovery from functional correspondence. In Computer Graphics Forum, volume 36, pages 700–711. Wiley Online Library, 2017.
[35] Jean-Michel Roufosse, Abhishek Sharma, and Maks Ovsjanikov. Unsupervised deep learning for structured shape matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 1617–1627, 2019.
[36] R. Rustamov, M. Ovsjanikov, O. Azencot, M. Ben-Chen, F. Chazal, and L. Guibas. Map-based exploration of intrinsic shape differences and variability. ACM Transactions on Graphics (TOG), 32(4):72:1–72:12, July 2013.
[37] Raif M. Rustamov. Laplace-Beltrami eigenfunctions for deformation invariant shape representation. In Proc. SGP, pages 225–233. Eurographics Association, 2007.
[38] Yusuf Sahillioglu. Recent advances in shape correspondence. The Visual Computer, 36(8):1705–1721, 2020.
[39] Justin Solomon, Fernando De Goes, Gabriel Peyre, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (TOG), 34(4):1–11, 2015.
[40] Justin Solomon, Gabriel Peyre, Vladimir G. Kim, and Suvrit Sra. Entropic metric alignment for correspondence problems. ACM Transactions on Graphics (TOG), 35(4):72, 2016.
[41] Robert W. Sumner and Jovan Popovic. Deformation transfer for triangle meshes. In ACM Transactions on Graphics (TOG), volume 23, pages 399–405. ACM, 2004.
[42] Gary K. L. Tam, Zhi-Quan Cheng, Yu-Kun Lai, Frank C. Langbein, Yonghuai Liu, David Marshall, Ralph R. Martin, Xian-Fang Sun, and Paul L. Rosin. Registration of 3D point clouds and meshes: a survey from rigid to nonrigid. IEEE TVCG, 19(7):1199–1217, 2013.
[43] Oliver Van Kaick, Hao Zhang, Ghassan Hamarneh, and Daniel Cohen-Or. A survey on shape correspondence. Computer Graphics Forum, 30(6):1681–1707, 2011.
[44] Matthias Vestner, Zorah Lahner, Amit Boyarski, Or Litany, Ron Slossberg, Tal Remez, Emanuele Rodola, Alex Bronstein, Michael Bronstein, and Ron Kimmel. Efficient deformable shape correspondence via kernel matching. In 3D Vision (3DV), 2017 International Conference on, pages 517–526. IEEE, 2017.
[45] Matthias Vestner, Roee Litman, Emanuele Rodola, Alex Bronstein, and Daniel Cremers. Product manifold filter: Non-rigid shape correspondence via kernel density estimation in the product space. In Proc. CVPR, pages 6681–6690, 2017.
[46] Fan Wang, Qixing Huang, and Leonidas J. Guibas. Image co-segmentation via consistent functional maps. In Proceedings of the IEEE International Conference on Computer Vision, pages 849–856, 2013.
[47] Y. Wang, B. Liu, K. Zhou, and Y. Tong. Vector field map representation for near conformal surface correspondence. Computer Graphics Forum, 37(6):72–83, 2018.