arXiv:1306.1083v1 [cs.CV] 5 Jun 2013

Discriminative Parameter Estimation for Random Walks Segmentation: Technical Report

Pierre-Yves Baudin (1-6), Danny Goodman (1-3), Puneet Kumar (1-3), Noura Azzabou (4-6), Pierre G. Carlier (4-6), Nikos Paragios (1-3), and M. Pawan Kumar (1-3)

1 Center for Visual Computing, École Centrale Paris, FR
2 Université Paris-Est, LIGM (UMR CNRS), École des Ponts ParisTech, FR
3 Équipe Galen, INRIA Saclay, FR
4 Institute of Myology, Paris, FR
5 CEA, I2BM, MIRCen, IdM NMR Laboratory, Paris, FR
6 UPMC University Paris 06, Paris, FR

1 Literature Survey on State-of-the-Art Algorithms in Muscle Segmentation

In the following, we present the principal methods addressing the segmentation of skeletal muscles.

Muscle Segmentation using Simplex Meshes with Medial Representations. A skeletal muscle segmentation method based on simplex meshes [6] was presented in [8]. Considering a 3D surface model, simplex meshes are discrete meshes in which each vertex has exactly three neighbors. This constant connectivity makes it simple to parametrize the location of one vertex with respect to its neighbors, and thus to parametrize deformations of the shape (translation, rotation, scaling) in a local manner. Indeed, the location of a vertex, denoted x, can be expressed as a linear combination of the locations of its three neighbors plus a local elevation term parallel to the local normal: x = ε1 x1 + ε2 x2 + (1 − ε1 − ε2) x3 + h n. As a result, many local measurements, including curvature and cell surface, can be computed efficiently, and global energy terms enforcing local constraints arise naturally.
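The local parametrization above can be made concrete: given a vertex and its three neighbors, the coefficients (ε1, ε2, h) are recovered by solving a small linear system. Below is a minimal NumPy sketch of this decomposition; taking the unit normal of the neighbor triangle as the "local normal" is our convention for illustration, not necessarily the exact convention of [8].

```python
import numpy as np

def simplex_params(x, x1, x2, x3):
    """Recover (eps1, eps2, h) such that
    x = eps1*x1 + eps2*x2 + (1 - eps1 - eps2)*x3 + h*n."""
    n = np.cross(x1 - x3, x2 - x3)
    n /= np.linalg.norm(n)                 # unit normal of the neighbor triangle
    # x - x3 = eps1*(x1 - x3) + eps2*(x2 - x3) + h*n  ->  3x3 linear system
    A = np.column_stack([x1 - x3, x2 - x3, n])
    eps1, eps2, h = np.linalg.solve(A, x - x3)
    return eps1, eps2, h
```

Because the three column vectors span R^3 for a non-degenerate triangle, the decomposition is unique, and substituting the recovered coefficients back reproduces x exactly.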

Here, the authors impose local smoothing via curvature averaging, which does not tend to reduce the surface as first-order operators typically do. Prior knowledge is imposed by constraining the local scale changes on the elevation parameter with respect to a reference shape. Denoting by S the surface of the triangle formed by the three neighbors of a vertex, and given the reference shape parameters (ε̃1, ε̃2, h̃, S̃), the new location of the considered vertex is expressed as:

x = \tilde{\varepsilon}_1 x_1 + \tilde{\varepsilon}_2 x_2 + (1 - \tilde{\varepsilon}_1 - \tilde{\varepsilon}_2) x_3 + \tilde{h} (S/\tilde{S})^{1/\beta} n,    (1)

where β ∈ [2, +∞[ is a parameter which sets the amount of allowed local deformation: with β = 2 this definition is similitude invariant; with β = +∞ this


definition is invariant through rigid transformations only. The model is attached to the target image either through gradient-norm maximization in the direction of the gradient at the vertex locations, or through maximization of similarities between the reference and target images at the vertex locations.

A medial representation, similar to the M-reps [13], is combined with the simplex parametrization to exploit the specific tubular shapes of skeletal muscles. Medial vertices are added to the model, constrained to remain on the medial axis of the tubular objects. This is achieved by connecting the new vertices to the surface vertices through spring-like forces. This constrains the global structure to resemble its initial reference shape, thus acting as a global shape prior. The medial axis representation also allows efficient collision handling. The model is fit to the image through an iterative process of successive local evolutions. Such models appear to always yield a valid solution, sometimes at the price of excessive regularization or a lack of adaptability to the specifics of the target image.

Muscle Segmentation using Deformable Models and Shape Matching. A shape prior for muscle segmentation in 3D images was presented in [9], deriving from a computer animation technique, called shape matching, used to efficiently approximate large soft-tissue elastic deformations. This method was applied to muscle segmentation with some success. In this approach, discrete meshes are used to parametrize the moving surface. Let x0 be the vector containing the initial positions of the control points of the parametric surface. Clustering is performed on x0 such that each cluster ζi contains at least a certain number of vertices (set by the user). During segmentation, the evolution of the active surface is performed according to the following iterative procedure:

1. Shift vertices according to the external force: x^t ← x^t + f_ext. The external "force" f_ext is computed by a maximal gradient search in the gradient direction.
2. Regularize vertex positions:
   (a) Compute a rigid registration for each cluster:

       T_i = \arg\min_{T} \sum_{j \in \zeta_i} \| T x_j^0 - x_j^t \|^2,    (2)

   (b) Average the target positions for each vertex:

       x_i^{t+1} = \frac{1}{|\zeta_i|} \sum_{j \in \zeta_i} T_j x_j^0.    (3)
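Step 2(a) is a Procrustes problem: the best rigid transform for each cluster has a closed-form solution via the Kabsch algorithm (an SVD of the cross-covariance matrix). The sketch below illustrates the regularization step only, under our own simplifications: clusters are given as index arrays, the external-force step is omitted, and each vertex's new position averages the predictions of the clusters containing it.

```python
import numpy as np

def rigid_fit(p0, pt):
    """Kabsch: least-squares rotation R and translation t mapping p0 onto pt."""
    c0, ct = p0.mean(axis=0), pt.mean(axis=0)
    H = (p0 - c0).T @ (pt - ct)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, ct - R @ c0

def regularize(x0, xt, clusters):
    """Eq. (3)-style step: average, for each vertex, the positions
    predicted by the rigid fits of the clusters that contain it."""
    acc = np.zeros_like(x0)
    cnt = np.zeros(len(x0))
    for idx in clusters:
        R, t = rigid_fit(x0[idx], xt[idx])      # Eq. (2) in closed form
        acc[idx] += x0[idx] @ R.T + t
        cnt[idx] += 1
    return acc / cnt[:, None]
```

When the displacement of the vertices happens to be exactly rigid, every cluster recovers the same transform and the regularized positions coincide with the shifted ones; for non-rigid displacements, the averaging pulls the surface back toward locally rigid motion.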

Single-reference prior models are convenient in that they require only one annotated example of the objects of interest. However, when segmenting a class of objects whose shape varies greatly, such an approach becomes too constraining and does not allow the model to adopt valid shapes that differ substantially from the single reference.


Muscle Segmentation using a Hierarchical Statistical Shape Model. A hierarchical prior model using Diffusion Wavelets was proposed in [7] to segment organs in MRI, including one calf muscle. This model builds on the formulation of the ASM [4], using a different basis for the subspace of valid solutions. One of the main drawbacks of ASMs is that they often require a large amount of training data in order to obtain relevant decomposition modes. Indeed, some uncorrelated shape features, such as global and local shape configurations, are often modeled by the same deformation modes. Thus, desired shape behaviors are often mixed with unwanted shape behaviors when optimizing the shape parameters for segmenting a new image. The hierarchical approach makes it possible to decorrelate small- and large-scale shape behaviors. Moreover, the presented method also decorrelates long-range shape behaviors, thus ensuring that deformation modes are spatially localized.

We give a brief summary of this method. First, a graph G = (V, E) is built on the set of landmarks: V is the set of nodes, with each landmark corresponding to a node in V; E is the set of edges, whose weights are determined through a statistical analysis of the mutual relations between the landmarks in the training set {x_k}_{k=1...K} (cf. Shape Maps [12]). As a result, landmarks with independent behaviors are connected by edges with small weights, whereas nodes with strongly related behaviors, such as neighboring points, are connected by large-weight edges.

Second, a Diffusion Wavelet decomposition of G is performed. This process involves computing the diffusion operator T of graph G, which is the symmetric normalized Laplace-Beltrami operator, and computing and compressing the dyadic powers of T. The output of this decomposition is a hierarchical orthogonal basis {Γ_i} for the graph, whose vectors correspond to different graph scales. Considering the vector of landmark positions decomposed on the new basis:

x = \bar{x} + \Gamma p,    (4)

global deformations, i.e. global relations between all the nodes, are controlled by some of the coefficients in p, while local interactions between close-by nodes are controlled by other coefficients in p. Projecting all the training examples onto this new basis, a PCA is performed at each scale of the decomposition. Finally, during the segmentation process, the landmarks are positioned on the target image in an iterative manner: 1) the position of the landmarks is updated according to a local appearance model; 2) they are projected onto the hierarchical subspace defined previously.
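With an orthonormal basis, the analysis/synthesis pair behind Eq. (4) is exact: p = Γ^⊤(x − x̄) recovers the coefficients, and groups of columns of Γ can be manipulated independently per scale. The sketch below uses a random orthonormal matrix as a stand-in for the diffusion-wavelet basis, and an arbitrary coarse/fine split of its columns; both are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                             # landmark coordinates
x_bar = rng.normal(size=n)                         # mean shape
Gamma, _ = np.linalg.qr(rng.normal(size=(n, n)))   # orthonormal stand-in basis

x = rng.normal(size=n)                             # an observed shape
p = Gamma.T @ (x - x_bar)                          # analysis: coefficients of Eq. (4)

# suppress the "fine-scale" coefficients (here: columns 4..15, arbitrary split),
# keeping only the coarse deformation
p_coarse = p.copy()
p_coarse[4:] = 0.0
x_smooth = x_bar + Gamma @ p_coarse
```

Since Γ is orthonormal, synthesis x̄ + Γp reproduces x exactly, and zeroing a group of coefficients can only shrink the deformation ‖x − x̄‖, which is what the per-scale projections exploit.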

Muscle Segmentation using a Continuous Region Model. A region-based segmentation method, proposed in [5], was extended to multi-label segmentation in [2], and applied to skeletal muscle segmentation in [1]. Before performing the PCA on the training samples, an Isometric Log-Ratio (ILR) transform is applied to the assignment vectors. The reason for using this transform is that multi-label segmentation requires maintaining valid probabilities at all times, which the previous method does not achieve. Here, the PCA is performed in the ILR space and its output


is projected back into the initial probability space. Denoting by η_γ = µ + Γγ a segmentation in the subspace of valid solutions spanned by the PCA in the ILR space, the following functional is proposed:

E(\eta_\gamma) = d(\eta_{BG}, \eta_\gamma)^2 + (1 - h(x)) |\nabla \eta_\gamma|^2 + \gamma^\top \Sigma^{-1} \gamma,    (5)

where d(η_BG, ·) is an intensity prior functional for separating muscle voxels from background voxels, and h(x) is an edge map of the target image, such that the energy is minimal when the boundaries of the model match the edges in the image.
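For reference, the ILR transform maps a probability (composition) vector to an unconstrained Euclidean space, where PCA is legitimate, and its inverse maps back to valid probabilities. This is a minimal sketch; we build the orthonormal contrast basis generically from the null space of the all-ones vector, rather than from any particular sequential binary partition as a compositional-data package might.

```python
import numpy as np

def ilr_basis(d):
    """Orthonormal basis of the hyperplane {v : sum(v) = 0} in R^d, shape (d-1, d)."""
    ones = np.ones((1, d))
    _, _, vh = np.linalg.svd(ones)
    return vh[1:]                          # rows orthonormal and orthogonal to 1

def ilr(p, V):
    clr = np.log(p) - np.log(p).mean()     # centered log-ratio (sums to zero)
    return V @ clr

def ilr_inv(z, V):
    e = np.exp(V.T @ z)
    return e / e.sum()                     # closure back onto the simplex
```

The round trip ilr_inv(ilr(p)) = p is exact for strictly positive p, which is why PCA modes computed in ILR space always map back to valid probability vectors.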

2 The Random Walks Algorithm

The Random Walks algorithm is graph-based: consider a graph G = (V, E), where V is a set of nodes, one per voxel in the 3D image, and E is a set of edges, one per pair of adjacent voxels. Let us also denote by S a set of labels, one per object to segment.

The aim of the Random Walks algorithm [10] is to compute an assignment probability of every voxel to every label. These probabilities depend on: i) the contrast between adjacent voxels, ii) manual, and thus deterministic, assignments of some voxels, called seeds, and iii) prior assignment probabilities.

The probabilities, contained in vector y, can be obtained by minimizing the following functional:

E_{prior}^{RW}(x, y) = y^\top L y + w \| y - y^0 \|_{\Omega(x)}^2
                     = y^\top \Big[ \sum_\alpha w_\alpha L_\alpha \Big] y + \sum_\beta w_\beta \| y - y^\beta \|_{\Omega_\beta}^2,

which is a linear combination of Laplacians and prior terms.

The Laplacian matrices L_\alpha contain the contrast terms. Their entries are of the form:

L_{i,j} = \begin{cases} \sum_k \omega_{kj} & \text{if } i = j, \\ -\omega_{ij} & \text{if } (i, j) \in E, \\ 0 & \text{otherwise,} \end{cases}    (6)

where \omega_{ij} denotes the weight of edge (i, j). It is usually computed as follows:

\omega_{ij} = \exp\big( -\beta (I_i - I_j)^2 \big),    (7)

where I_i is the intensity of voxel i. In our experiments, we used three different Laplacians of this form, with three values of β: 50, 100 and 150 (with the image voxel values normalized by their empirical standard deviation).
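The weights of Eq. (7) and the Laplacian of Eq. (6) can be assembled directly as a sparse matrix. Below is a minimal sketch for a 2D image with 4-connectivity (the text works with 3D volumes; the extension just adds a third neighbor direction). The function name and structure are ours, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp

def contrast_laplacian(img, beta):
    """Build the graph Laplacian of Eq. (6) with Gaussian contrast weights, Eq. (7)."""
    img = img / img.std()          # normalize by the empirical standard deviation
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    # 4-connectivity: edges to the right and downward neighbors
    ei = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    ej = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    diff = img.ravel()[ei] - img.ravel()[ej]
    wgt = np.exp(-beta * diff ** 2)
    # symmetric adjacency W, then L = D - W
    W = sp.coo_matrix((np.concatenate([wgt, wgt]),
                       (np.concatenate([ei, ej]), np.concatenate([ej, ei]))),
                      shape=(h * w, h * w)).tocsr()
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W
```

By construction the rows of L sum to zero and L is symmetric positive semi-definite, which is what makes the quadratic form y^⊤Ly a valid smoothness energy.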


We also implemented the lesser-used alternative formulation:

w_{ij} = \frac{1}{\beta |I_i - I_j| + \epsilon},    (8)

which we employed in one additional Laplacian term with β = 100 and ε = 1, for comparison purposes, since the selected values give good results on their own.

Since the objective function is quadratic in y, its minimum can be computed by solving a linear system. The quadratic term, composed of a sum of Laplacians and of diagonal matrices arising from the prior term, is very sparse and has a specific structure, owing to the fact that only adjacent voxels are connected by an edge.

Given the size of the problem (several million variables), this system has to be solved with iterative methods, such as Conjugate Gradient. The specific structure of the problem and the existence of parallelized algorithms (such as multigrid Conjugate Gradient) allow for an efficient optimization. For instance, our own implementation takes less than 20 s for volumes of size 200 × 200 × 100 on a regular desktop machine.
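A seeded solve can be sketched with SciPy's Conjugate Gradient. This simplified version uses a single Laplacian and no prior term (the full energy above combines several weighted terms): seeded rows are eliminated and the remaining symmetric positive-definite block is solved per label, following the standard random-walker linear system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def random_walker_probs(L, seeds, labels, n_labels):
    """Solve the seeded random-walker system with Conjugate Gradient.

    L: (n, n) sparse Laplacian; seeds: indices of seeded voxels;
    labels: label index of each seed. Returns (n, n_labels) probabilities."""
    n = L.shape[0]
    unseeded = np.setdiff1d(np.arange(n), seeds)
    Lu = L[unseeded][:, unseeded]          # block over unseeded voxels
    B = L[unseeded][:, seeds]              # coupling to the seeds
    probs = np.zeros((n, n_labels))
    for lab in range(n_labels):
        m = (labels == lab).astype(float)  # seed indicator for this label
        x, info = cg(Lu, -B @ m)           # Lu is SPD when every component is seeded
        assert info == 0
        probs[unseeded, lab] = x
        probs[seeds, lab] = m
    return probs
```

On a path graph with two seeds at the endpoints, the solution is the expected linear profile: the probability of reaching a seed first decays with graph distance to it, and the per-voxel probabilities sum to one across labels.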

3 Derivation of the Latent SVM Upper Bound

Given a dataset D = {(x_k, z_k), k = 1, ..., N}, which consists of inputs x_k and their hard segmentations z_k, we would like to estimate parameters w such that the resulting inferred segmentations are accurate. Here, the accuracy is measured using the loss function Δ(·, ·). Formally, let y_k(w) denote the soft segmentation obtained by minimizing the energy functional E(·, x_k; w) for the k-th training sample, that is,

y_k(w) = \arg\min_y \; w^\top \psi(x_k, y).    (9)

We would like to learn the parameters w such that the empirical risk is minimized over all samples in the dataset. In other words, we would like to estimate the parameters w^⋆ such that

w^\star = \arg\min_w \; \frac{1}{N} \sum_k \Delta(z_k, y_k(w)).    (10)

The above objective function is highly non-convex in w, which makes it prone to bad local minima. To alleviate this deficiency, the latent SVM


formulation upper bounds the risk for a sample (x_k, z_k) as follows:

\Delta(z_k, y_k(w)) = \Delta(z_k, y_k(w)) + w^\top [\psi(x_k, y_k(w)) - \psi(x_k, y_k(w))]    (11)
\leq \min_{y : \Delta(z_k, y) = 0} w^\top \psi(x_k, y) - [w^\top \psi(x_k, y_k(w)) - \Delta(z_k, y_k(w))]    (12)
\leq \min_{y : \Delta(z_k, y) = 0} w^\top \psi(x_k, y) - \min_y [w^\top \psi(x_k, y) - \Delta(z_k, y)].    (13)

The first inequality follows from the fact that the prediction y_k(w) has the minimum possible energy (see Equation (9)); thus, its energy has to be less than or equal to the energy of any compatible segmentation y with Δ(z_k, y) = 0. The second inequality holds since it replaces the loss-augmented energy of the prediction y_k(w) with the minimum loss-augmented energy.
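The chain of inequalities can be checked numerically on a toy problem with a small, enumerable output space; the random feature vectors and loss values below are arbitrary stand-ins for ψ and Δ, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outputs, dim = 20, 5
psi = rng.normal(size=(n_outputs, dim))   # psi(x, y) for each candidate y
z = 3                                     # index of the ground-truth output
delta = 1.0 + rng.random(n_outputs)       # loss Delta(z, y) > 0 ...
delta[z] = 0.0                            # ... except for the ground truth
w = rng.normal(size=dim)

energies = psi @ w
y_pred = energies.argmin()                # prediction y(w), Eq. (9)
risk = delta[y_pred]

# Upper bound of Eq. (13): min over compatible y, minus the
# minimum loss-augmented energy over all y
compatible = delta == 0
bound = energies[compatible].min() - (energies - delta).min()
assert risk <= bound + 1e-12
```

Since the prediction attains the global energy minimum, the compatible minimum can only be larger, and the loss-augmented minimum can only be smaller, so the bound always holds, for any draw of w.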

This inequality leads to the following minimization problem:

\min_{w, \xi_k \geq 0} \; \lambda \|w\|^2 + \frac{1}{N} \sum_k \xi_k,    (14)
s.t. \; \min_{\bar{y} : \Delta(z_k, \bar{y}) = 0} w^\top \psi(x_k, \bar{y}) \leq w^\top \psi(x_k, y) - \Delta(z_k, y) + \xi_k, \quad \forall y, \forall k,

where λ‖w‖² is a regularization term which prevents overfitting the parameters to the training data.

4 Dual-Decomposition Algorithm for the ACI

Briefly, dual decomposition allows us to iteratively solve a convex optimization problem of the form

y^* = \arg\min_{y \in F} \; \sum_{m=1}^{M} g_m(y).    (15)

At each iteration t, it solves a set of slave problems

y_m^* = \arg\min_{y_m \in F} \; \big( g_m(y_m) + (\rho_m^t)^\top y_m \big),    (16)

where the ρ_m^t are dual variables satisfying \sum_m \rho_m^t = 0. The dual variables are initialized as ρ_m^0 = 0, ∀m, and updated at iteration t as follows:

\rho_m^{t+1} \leftarrow \rho_m^t + \eta_t \Big( y_m^* - \frac{1}{M} \sum_n y_n^* \Big),    (17)

where η_t is the learning rate at iteration t. Under fairly general conditions, this iterative strategy converges to the globally optimal solution of the original problem, that is, y^* = y_m^*, ∀m. We refer the reader to [3,11] for details.

In order to specify our slave problems, we divide the set of voxels V into subsets V_m, m = 1, ..., M, such that each pair of neighboring voxels (i, j) ∈ N


appear together in exactly one subset V_m. Given such a division of voxels, our slave problems correspond to the following:

\min_{y_m \in C(V_m)} \; y_m^\top L_m(x; w) y_m + E^{prior}_m(y_m, x; w) + (\rho_m^t)^\top y_m,    (18)

where L_m(x; w) is the Laplacian corresponding to the voxels V_m. The prior energy functions E^{prior}_m modify the original prior E^{prior} by weighing each voxel i ∈ V_m by the reciprocal of the number of subsets V_n that contain i. In other words, the prior term for each voxel i ∈ V_m is multiplied by 1/|{V_n : i ∈ V_n}|.

The slave problems defined above can be shown to provide a valid reparameterization of the original problem:

\min_{y \in C(V)} \; y^\top L(x; w) y + E^{prior}(y, x; w).    (19)

By using small subsets V_m, we can optimize each slave problem at every iteration using a standard quadratic programming solver. In our experiments, we used the Mosek solver. To the best of our knowledge, this is the first application of dual decomposition to solve a probabilistic segmentation problem under linear constraints.
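The scheme of Eqs. (15)-(17) can be illustrated on a toy consensus problem in which each slave g_m(y) = ½‖y − a_m‖² has a closed-form minimizer; the quadratic slaves and the step-size schedule are our choices for illustration, standing in for the QP slaves of Eq. (18).

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 4, 3
a = rng.normal(size=(M, d))        # slave targets; the global optimum is their mean

rho = np.zeros((M, d))             # dual variables, sum_m rho_m = 0
for t in range(200):
    # slave problems, Eq. (16): argmin_y 0.5*||y - a_m||^2 + rho_m . y
    # has the closed-form solution y_m = a_m - rho_m
    y = a - rho
    # dual update, Eq. (17), with a decaying learning rate
    eta = 1.0 / (1 + t) ** 0.5
    rho += eta * (y - y.mean(axis=0))
    rho -= rho.mean(axis=0)        # keep the sum-to-zero constraint exact

# all slaves agree on the global minimizer of sum_m g_m
assert np.allclose(y, a.mean(axis=0), atol=1e-3)
```

The dual updates push each slave's solution toward the average of all slaves, and at convergence the slaves agree, which is exactly the consistency condition y^* = y_m^*, ∀m, stated above.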

References

1. Andrews, S., Hamarneh, G., Yazdanpanah, A., HajGhanbari, B., Reid, W.D.: Probabilistic multi-shape segmentation of knee extensor and flexor muscles. In: MICCAI, vol. 14, pp. 651–658 (Jan 2011), http://www.ncbi.nlm.nih.gov/pubmed/22003755
2. Andrews, S., McIntosh, C., Hamarneh, G.: Convex multi-region probabilistic segmentation with shape prior in the isometric log-ratio transformation space. Medical Image Analysis, pp. 1–17 (2011)
3. Bertsekas, D.: Nonlinear Programming. Athena Scientific (1999)
4. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active shape models: their training and application. Computer Vision and Image Understanding 61(1), 38–59 (1995), http://www.isbe.man.ac.uk/~bim/Papers/cootes_cviu95.pdf
5. Cremers, D., Schmidt, F.R., Barthel, F.: Shape priors in variational image segmentation: Convexity, Lipschitz continuity and globally optimal solutions. In: CVPR, pp. 1–6 (Jun 2008), http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4587446
6. Delingette, H.: Modélisation, déformation et reconnaissance d'objets tridimensionnels à l'aide de maillages simplexes. Thèse de sciences, École Centrale Paris (July 1994), http://www.inria.fr/sophia/asclepios/Publications/Herve.Delingette/These-Delingette.pdf
7. Essafi, S., Langs, G., Paragios, N.: Hierarchical 3D diffusion wavelet shape priors. In: ICCV, pp. 1717–1724. IEEE (Sep 2009), http://dspace.mit.edu/handle/1721.1/59305
8. Gilles, B., Magnenat-Thalmann, N.: Musculoskeletal MRI segmentation using multi-resolution simplex meshes with medial representations. Medical Image Analysis 14(3), 291–302 (Jun 2010), http://www.ncbi.nlm.nih.gov/pubmed/20303319
9. Gilles, B., Pai, D.K.: Fast musculoskeletal registration based on shape matching. In: MICCAI, Lecture Notes in Computer Science, vol. 5242, pp. 822–829 (Jan 2008), http://www.ncbi.nlm.nih.gov/pubmed/18982681
10. Grady, L.: Random walks for image segmentation. PAMI (2006)
11. Komodakis, N., Paragios, N., Tziritas, G.: MRF optimization via dual decomposition: Message-passing revisited. In: ICCV (2007)
12. Langs, G., Paragios, N.: Modeling the structure of multivariate manifolds: Shape Maps (2008)
13. Pizer, S.M., Fletcher, P.T., Joshi, S., Thall, A., Chen, J.Z., Fridman, Y., Fritsch, D.S., Gash, A.G., Glotzer, J.M., Jiroutek, M.R., et al.: Deformable M-reps for 3D medical image segmentation. International Journal of Computer Vision 55(2–3), 85–106 (2003)

Page 9: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

arX

iv:1

306.

1083

v1 [

cs.C

V]

5 J

un 2

013

Preface

This textbook is intended for use by students of physics, physical chemistry,and theoretical chemistry. The reader is presumed to have a basic knowledgeof atomic and quantum physics at the level provided, for example, by the firstfew chapters in our book The Physics of Atoms and Quanta. The student ofphysics will find here material which should be included in the basic educationof every physicist. This book should furthermore allow students to acquire anappreciation of the breadth and variety within the field of molecular physics andits future as a fascinating area of research.

For the student of chemistry, the concepts introduced in this book will providea theoretical framework for that entire field of study. With the help of these con-cepts, it is at least in principle possible to reduce the enormous body of empiricalchemical knowledge to a few basic principles: those of quantum mechanics. Inaddition, modern physical methods whose fundamentals are introduced here arebecoming increasingly important in chemistry and now represent indispensabletools for the chemist. As examples, we might mention the structural analysis ofcomplex organic compounds, spectroscopic investigation of very rapid reactionprocesses or, as a practical application, the remote detection of pollutants in theair.

April 1995 Walter OlthoffProgram Chair

ECOOP’95

Page 10: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

Organization

ECOOP’95 is organized by the department of Computer Science, Univeristyof Arhus and AITO (association Internationa pour les Technologie Object) incooperation with ACM/SIGPLAN.

Executive Commitee

Conference Chair: Ole Lehrmann Madsen (Arhus University, DK)Program Chair: Walter Olthoff (DFKI GmbH, Germany)Organizing Chair: Jørgen Lindskov Knudsen (Arhus University,

DK)Tutorials: Birger Møller-Pedersen

(Norwegian Computing Center, Norway)Workshops: Eric Jul (University of Kopenhagen, Denmark)Panels: Boris Magnusson (Lund University, Sweden)Exhibition: Elmer Sandvad (Arhus University, DK)Demonstrations: Kurt Nørdmark (Arhus University, DK)

Program Commitee

Conference Chair: Ole Lehrmann Madsen (Arhus University, DK)Program Chair: Walter Olthoff (DFKI GmbH, Germany)Organizing Chair: Jørgen Lindskov Knudsen (Arhus University,

DK)Tutorials: Birger Møller-Pedersen

(Norwegian Computing Center, Norway)Workshops: Eric Jul (University of Kopenhagen, Denmark)Panels: Boris Magnusson (Lund University, Sweden)Exhibition: Elmer Sandvad (Arhus University, DK)Demonstrations: Kurt Nørdmark (Arhus University, DK)

Referees

V. AndreevBarwolffE. BarreletH.P. BeckG. BernardiE. BinderP.C. Bosetti

BraunschweigF.W. BusserT. CarliA.B. CleggG. CozzikaS. DagoretDel Buono

P. DingusH. DuhmJ. EbertS. EichenbergerR.J. EllisonFeltesseW. Flauger

Page 11: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

III

A. FomenkoG. FrankeJ. GarveyM. GennisL. GoerlichP. GoritchevH. GreifE.M. HanlonR. HaydarR.C.W. HendersoP. HillH. HufnagelA. JacholkowskaJohannsenS. KasarianI.R. KenyonC. KleinwortT. KohlerS.D. KolyaP. Kostka

U. KrugerJ. KurzhoferM.P.J. LandonA. LebedevCh. LeyF. LinselH. LohmandMartinS. MassonK. MeierC.A. MeyerS. MikockiJ.V. MorrisB. NaroskaNguyenU. ObrockG.D. PatelCh. PichlerS. PrellF. Raupach

V. RiechP. RobmannN. SahlmannP. SchleperSchoningB. SchwabA. SemenovG. SiegmonJ.R. SmithM. SteenbockU. StraumannC. ThiebauxP. Van Eschfrom Yerevan PhL.R. WestG.-G. WinterT.P. YiouM. Zimmer

Sponsoring Institutions

Bernauer-Budiman Inc., Reading, Mass.The Hofmann-International Company, San Louis Obispo, Cal.Kramer Industries, Heidelberg, Germany

Page 12: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

Table of Contents

Hamiltonian Mechanics

Hamiltonian Mechanics unter besonderer Berucksichtigung derhohreren Lehranstalten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Ivar Ekeland, Roger Temam, Jeffrey Dean, David Grove, CraigChambers, Kim B. Bruce, and Elisa Bertino

Hamiltonian Mechanics2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7Ivar Ekeland and Roger Temam

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Page 13: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

Hamiltonian Mechanics unter besonderer

Berucksichtigung der hohreren Lehranstalten

Ivar Ekeland1, Roger Temam2 Jeffrey Dean, David Grove, Craig Chambers,Kim B. Bruce, and Elsa Bertino

1 Princeton University, Princeton NJ 08544, USA,[email protected],

WWW home page: http://users/~iekeland/web/welcome.html2 Universite de Paris-Sud, Laboratoire d’Analyse Numerique, Batiment 425,

F-91405 Orsay Cedex, France

Abstract. The abstract should summarize the contents of the paperusing at least 70 and at most 150 words. It will be set in 9-point fontsize and be inset 1.0 cm from the right and left margins. There will betwo blank lines before and after the Abstract. . . .

Keywords: computational geometry, graph theory, Hamilton cycles

1 Fixed-Period Problems: The Sublinear Case

With this chapter, the preliminaries are over, and we begin the search for periodicsolutions to Hamiltonian systems. All this will be done in the convex case; thatis, we shall study the boundary-value problem

x = JH ′(t, x)

x(0) = x(T )

with H(t, ·) a convex function of x, going to +∞ when ‖x‖ → ∞.

1.1 Autonomous Systems

In this section, we will consider the case when the Hamiltonian H(x) is au-tonomous. For the sake of simplicity, we shall also assume that it is C1.

We shall first consider the question of nontriviality, within the general frame-work of (A∞, B∞)-subquadratic Hamiltonians. In the second subsection, we shalllook into the special case when H is (0, b∞)-subquadratic, and we shall try toderive additional information.

The General Case: Nontriviality. We assume that H is (A∞, B∞)-sub-quadratic at infinity, for some constant symmetric matrices A∞ and B∞, withB∞ −A∞ positive definite. Set:

γ : = smallest eigenvalue of B∞ −A∞ (1)

λ : = largest negative eigenvalue of Jd

dt+A∞ . (2)

Page 14: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

2

Theorem 1 tells us that if λ+ γ < 0, the boundary-value problem:

x = JH ′(x)x(0) = x(T )

(3)

has at least one solution x, which is found by minimizing the dual action func-tional:

ψ(u) =

∫ T

o

[1

2

(Λ−1

o u, u)+N∗(−u)

]dt (4)

on the range of Λ, which is a subspace R(Λ)2L with finite codimension. Here

N(x) := H(x)−1

2(A∞x, x) (5)

is a convex function, and

N(x) ≤1

2((B∞ −A∞)x, x) + c ∀x . (6)

Proposition 1. Assume H ′(0) = 0 and H(0) = 0. Set:

δ := lim infx→0

2N(x) ‖x‖−2. (7)

If γ < −λ < δ, the solution u is non-zero:

x(t) 6= 0 ∀t . (8)

Proof. Condition (7) means that, for every δ′ > δ, there is some ε > 0 such that

‖x‖ ≤ ε⇒ N(x) ≤δ′

2‖x‖2 . (9)

It is an exercise in convex analysis, into which we shall not go, to show thatthis implies that there is an η > 0 such that

f ‖x‖ ≤ η ⇒ N∗(y) ≤1

2δ′‖y‖2 . (10)

Fig. 1. This is the caption of the figure displaying a white eagle and a white horse ona snow field

Page 15: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

3

Since u1 is a smooth function, we will have ‖hu1‖∞ ≤ η for h small enough,and inequality (10) will hold, yielding thereby:

ψ(hu1) ≤h2

2

1

λ‖u1‖

2

2+h2

2

1

δ′‖u1‖

2. (11)

If we choose δ′ close enough to δ, the quantity(1

λ + 1

δ′

)will be negative, and

we end up withψ(hu1) < 0 for h 6= 0 small . (12)

On the other hand, we check directly that ψ(0) = 0. This shows that 0 cannotbe a minimizer of ψ, not even a local one. So u 6= 0 and u 6= Λ−1

o (0) = 0. ⊓⊔

Corollary 1. Assume H is C2 and (a∞, b∞)-subquadratic at infinity. Let ξ1,. . . , ξN be the equilibria, that is, the solutions of H ′(ξ) = 0. Denote by ωk thesmallest eigenvalue of H ′′ (ξk), and set:

ω := Min ω1, . . . , ωk . (13)

If:T

2πb∞ < −E

[−T

2πa∞

]<

T

2πω (14)

then minimization of ψ yields a non-constant T -periodic solution x.

We recall once more that by the integer part E[α] of α ∈ IR, we mean thea ∈ ZZ such that a < α ≤ a + 1. For instance, if we take a∞ = 0, Corollary 2tells us that x exists and is non-constant provided that:

T

2πb∞ < 1 <

T

2π(15)

or

T ∈

(2π

ω,2π

b∞

). (16)

Proof. The spectrum of Λ is 2πT ZZ + a∞. The largest negative eigenvalue λ is

given by 2πT ko + a∞, where

Tko + a∞ < 0 ≤

T(ko + 1) + a∞ . (17)

Hence:

ko = E

[−T

2πa∞

]. (18)

The condition γ < −λ < δ now becomes:

b∞ − a∞ < −2π

Tko − a∞ < ω − a∞ (19)

which is precisely condition (14). ⊓⊔

Page 16: Universit´e Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ... · arXiv:1306.1083v1 [cs.CV] 5 Jun 2013 Discriminative Parameter Estimation for Random Walks Segmentation: TechnicalReport

4

Lemma 1. Assume that $H$ is $C^2$ on $\mathbb{R}^{2n} \setminus \{0\}$ and that $H''(x)$ is non-degenerate for any $x \ne 0$. Then any local minimizer $\bar{x}$ of $\psi$ has minimal period $T$.

Proof. We know that $\bar{x}$, or $\bar{x} + \xi$ for some constant $\xi \in \mathbb{R}^{2n}$, is a $T$-periodic solution of the Hamiltonian system:

\[
\dot{x} = JH'(x) . \tag{20}
\]

There is no loss of generality in taking $\xi = 0$. So $\psi(x) \ge \psi(\bar{x})$ for all $x$ in some neighbourhood of $\bar{x}$ in $W^{1,2}(\mathbb{R}/T\mathbb{Z}; \mathbb{R}^{2n})$.

But this index is precisely the index $i_T(\bar{x})$ of the $T$-periodic solution $\bar{x}$ over the interval $(0, T)$, as defined in Sect. 2.6. So

\[
i_T(\bar{x}) = 0 . \tag{21}
\]

Now if $\bar{x}$ has a lower period, $T/k$ say, we would have, by Corollary 31:

\[
i_T(\bar{x}) = i_{kT/k}(\bar{x}) \ge k\, i_{T/k}(\bar{x}) + k - 1 \ge k - 1 \ge 1 . \tag{22}
\]

This would contradict (21), and thus cannot happen. ⊓⊔

Notes and Comments. The results in this section are a refined version of [1]; the minimality result of Proposition 14 was the first of its kind.

To understand the nontriviality conditions, such as the one in formula (16), one may think of a one-parameter family $x_T$, $T \in \left(2\pi\omega^{-1}, 2\pi b_\infty^{-1}\right)$ of periodic solutions, $x_T(0) = x_T(T)$, with $x_T$ going away to infinity when $T \to 2\pi\omega^{-1}$, which is the period of the linearized system at $0$.

Table 1. This is the example table taken out of The TeXbook, p. 246

Year         World population
8000 B.C.    5,000,000
50 A.D.      200,000,000
1650 A.D.    500,000,000
1945 A.D.    2,300,000,000
1980 A.D.    4,400,000,000

Theorem 1 (Ghoussoub-Preiss). Assume $H(t, x)$ is $(0, \varepsilon)$-subquadratic at infinity for all $\varepsilon > 0$, and $T$-periodic in $t$

\[
H(t, \cdot) \text{ is convex } \forall t \tag{23}
\]
\[
H(\cdot, x) \text{ is } T\text{-periodic } \forall x \tag{24}
\]
\[
H(t, x) \ge n(\|x\|) \quad \text{with} \quad n(s)s^{-1} \to \infty \text{ as } s \to \infty \tag{25}
\]


\[
\forall \varepsilon > 0 , \ \exists c : \ H(t, x) \le \frac{\varepsilon}{2}\|x\|^2 + c . \tag{26}
\]

Assume also that $H$ is $C^2$, and $H''(t, x)$ is positive definite everywhere. Then there is a sequence $x_k$, $k \in \mathbb{N}$, of $kT$-periodic solutions of the system

\[
\dot{x} = JH'(t, x) \tag{27}
\]

such that, for every $k \in \mathbb{N}$, there is some $p_o \in \mathbb{N}$ with:

\[
p \ge p_o \Rightarrow x_{pk} \ne x_k . \tag{28}
\]

⊓⊔

Example 1 (External forcing). Consider the system:

\[
\dot{x} = JH'(x) + f(t) \tag{29}
\]

where the Hamiltonian $H$ is $(0, b_\infty)$-subquadratic, and the forcing term is a distribution on the circle:

\[
f = \frac{d}{dt}F + f_o \quad \text{with} \quad F \in L^2(\mathbb{R}/T\mathbb{Z}; \mathbb{R}^{2n}) , \tag{30}
\]

where $f_o := T^{-1}\int_o^T f(t)\,dt$. For instance,

\[
f(t) = \sum_{k \in \mathbb{N}} \delta_k \, \xi , \tag{31}
\]

where $\delta_k$ is the Dirac mass at $t = k$ and $\xi \in \mathbb{R}^{2n}$ is a constant, fits the prescription. This means that the system $\dot{x} = JH'(x)$ is being excited by a series of identical shocks at interval $T$.

Definition 1. Let $A_\infty(t)$ and $B_\infty(t)$ be symmetric operators in $\mathbb{R}^{2n}$, depending continuously on $t \in [0, T]$, such that $A_\infty(t) \le B_\infty(t)$ for all $t$.

A Borelian function $H : [0, T] \times \mathbb{R}^{2n} \to \mathbb{R}$ is called $(A_\infty, B_\infty)$-subquadratic at infinity if there exists a function $N(t, x)$ such that:

\[
H(t, x) = \frac{1}{2}(A_\infty(t)x, x) + N(t, x) \tag{32}
\]
\[
\forall t , \ N(t, x) \text{ is convex with respect to } x \tag{33}
\]
\[
N(t, x) \ge n(\|x\|) \quad \text{with} \quad n(s)s^{-1} \to +\infty \text{ as } s \to +\infty \tag{34}
\]
\[
\exists c \in \mathbb{R} : \ H(t, x) \le \frac{1}{2}(B_\infty(t)x, x) + c \quad \forall x . \tag{35}
\]

If $A_\infty(t) = a_\infty I$ and $B_\infty(t) = b_\infty I$, with $a_\infty \le b_\infty \in \mathbb{R}$, we shall say that $H$ is $(a_\infty, b_\infty)$-subquadratic at infinity. As an example, the function $\|x\|^\alpha$, with $1 \le \alpha < 2$, is $(0, \varepsilon)$-subquadratic at infinity for every $\varepsilon > 0$. Similarly, the Hamiltonian

\[
H(t, x) = \frac{1}{2} k \|x\|^2 + \|x\|^\alpha \tag{36}
\]

is $(k, k + \varepsilon)$-subquadratic for every $\varepsilon > 0$. Note that, if $k < 0$, it is not convex.
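The claim that $\|x\|^\alpha$ with $1 \le \alpha < 2$ is $(0, \varepsilon)$-subquadratic can be made concrete: the constant $c$ in the bound may be taken as the maximum of $s^\alpha - \frac{\varepsilon}{2}s^2$ over $s \ge 0$, attained at $s_* = (\alpha/\varepsilon)^{1/(2-\alpha)}$. A grid check under sample values of $\alpha$ and $\varepsilon$ (our choice, for illustration only):

```python
# Why s**alpha (1 <= alpha < 2) admits the bound (eps/2)*s**2 + c:
# g(s) = s**alpha - (eps/2)*s**2 has g'(s) = 0 at s* = (alpha/eps)**(1/(2-alpha)),
# so c = g(s*) makes the inequality hold for every s >= 0.
alpha, eps = 1.5, 0.1
s_star = (alpha / eps) ** (1.0 / (2.0 - alpha))
c = s_star**alpha - (eps / 2.0) * s_star**2

# Grid check of s**alpha <= (eps/2)*s**2 + c on [0, 1000].
for i in range(100001):
    s = i * 0.01
    assert s**alpha <= (eps / 2.0) * s**2 + c + 1e-9
```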


Notes and Comments. The first results on subharmonics were obtained by Rabinowitz in [5], who showed the existence of infinitely many subharmonics both in the subquadratic and superquadratic case, with suitable growth conditions on $H'$. Again the duality approach enabled Clarke and Ekeland in [2] to treat the same problem in the convex-subquadratic case, with growth conditions on $H$ only.

Recently, Michalek and Tarantello (see [3] and [4]) have obtained lower bounds on the number of subharmonics of period $kT$, based on symmetry considerations and on pinching estimates, as in Sect. 5.2 of this article.

References

1. Clarke, F., Ekeland, I.: Nonlinear oscillations and boundary-value problems for Hamiltonian systems. Arch. Rat. Mech. Anal. 78, 315–333 (1982)
2. Clarke, F., Ekeland, I.: Solutions periodiques, du periode donnee, des equations hamiltoniennes. Note CRAS Paris 287, 1013–1015 (1978)
3. Michalek, R., Tarantello, G.: Subharmonic solutions with prescribed minimal period for nonautonomous Hamiltonian systems. J. Diff. Eq. 72, 28–55 (1988)
4. Tarantello, G.: Subharmonic solutions for Hamiltonian systems via a $\mathbb{Z}_p$ pseudoindex theory. Annali di Matematica Pura (to appear)
5. Rabinowitz, P.: On subharmonic solutions of a Hamiltonian system. Comm. Pure Appl. Math. 33, 609–633 (1980)


Hamiltonian Mechanics

Ivar Ekeland¹ and Roger Temam²

¹ Princeton University, Princeton NJ 08544, USA
² Université de Paris-Sud, Laboratoire d'Analyse Numérique, Bâtiment 425, F-91405 Orsay Cedex, France

Abstract. The abstract should summarize the contents of the paper using at least 70 and at most 150 words. It will be set in 9-point font size and be inset 1.0 cm from the right and left margins. There will be two blank lines before and after the Abstract. . . .

Keywords: graph transformations, convex geometry, lattice computations, convex polygons, triangulations, discrete geometry

1 Fixed-Period Problems: The Sublinear Case

With this chapter, the preliminaries are over, and we begin the search for periodic solutions to Hamiltonian systems. All this will be done in the convex case; that is, we shall study the boundary-value problem

\[
\dot{x} = JH'(t, x)
\]
\[
x(0) = x(T)
\]

with $H(t, \cdot)$ a convex function of $x$, going to $+\infty$ when $\|x\| \to \infty$.

1.1 Autonomous Systems

In this section, we will consider the case when the Hamiltonian $H(x)$ is autonomous. For the sake of simplicity, we shall also assume that it is $C^1$.

We shall first consider the question of nontriviality, within the general framework of $(A_\infty, B_\infty)$-subquadratic Hamiltonians. In the second subsection, we shall look into the special case when $H$ is $(0, b_\infty)$-subquadratic, and we shall try to derive additional information.

The General Case: Nontriviality. We assume that $H$ is $(A_\infty, B_\infty)$-subquadratic at infinity, for some constant symmetric matrices $A_\infty$ and $B_\infty$, with $B_\infty - A_\infty$ positive definite. Set:

\[
\gamma := \text{smallest eigenvalue of } B_\infty - A_\infty \tag{1}
\]
\[
\lambda := \text{largest negative eigenvalue of } J\frac{d}{dt} + A_\infty . \tag{2}
\]
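For concreteness, $\gamma$ in (1) is an ordinary smallest-eigenvalue computation once $A_\infty$ and $B_\infty$ are fixed. A sketch with illustrative constant matrices (our choice, not from the text):

```python
import numpy as np

# Illustrative symmetric matrices (hypothetical values, for demonstration only):
A_inf = np.array([[0.0, 0.5],
                  [0.5, 0.0]])
B_inf = np.array([[2.0, 0.5],
                  [0.5, 3.0]])

# gamma in (1): smallest eigenvalue of the symmetric matrix B_inf - A_inf.
# eigvalsh returns eigenvalues of a symmetric matrix in ascending order.
gamma = np.linalg.eigvalsh(B_inf - A_inf)[0]

# The text assumes B_inf - A_inf positive definite, i.e. gamma > 0.
assert gamma > 0
```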


Theorem 21 tells us that if $\lambda + \gamma < 0$, the boundary-value problem:

\[
\dot{x} = JH'(x) , \quad x(0) = x(T) \tag{3}
\]

has at least one solution $\bar{x}$, which is found by minimizing the dual action functional:

\[
\psi(u) = \int_o^T \left[\frac{1}{2}\left(\Lambda_o^{-1} u, u\right) + N^*(-u)\right] dt \tag{4}
\]

on the range of $\Lambda$, which is a subspace $R(\Lambda)_L^2$ with finite codimension. Here

\[
N(x) := H(x) - \frac{1}{2}(A_\infty x, x) \tag{5}
\]

is a convex function, and

\[
N(x) \le \frac{1}{2}\left((B_\infty - A_\infty)x, x\right) + c \quad \forall x . \tag{6}
\]

Proposition 1. Assume $H'(0) = 0$ and $H(0) = 0$. Set:

\[
\delta := \liminf_{x \to 0} 2N(x)\,\|x\|^{-2} . \tag{7}
\]

If $\gamma < -\lambda < \delta$, the solution $\bar{u}$ is non-zero:

\[
\bar{x}(t) \ne 0 \quad \forall t . \tag{8}
\]

Proof. Condition (7) means that, for every $\delta' > \delta$, there is some $\varepsilon > 0$ such that

\[
\|x\| \le \varepsilon \Rightarrow N(x) \le \frac{\delta'}{2}\|x\|^2 . \tag{9}
\]

It is an exercise in convex analysis, into which we shall not go, to show that this implies that there is an $\eta > 0$ such that

\[
\|y\| \le \eta \Rightarrow N^*(y) \le \frac{1}{2\delta'}\|y\|^2 . \tag{10}
\]
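The passage from (9) to (10) is the familiar duality between quadratic bounds. For the model case $N(x) = \frac{\delta'}{2}\|x\|^2$ in one dimension, the convex (Legendre) conjugate is exactly $N^*(y) = \frac{1}{2\delta'}\|y\|^2$, so (10) holds there with equality. A crude numeric check of the sup defining the conjugate (sample $\delta'$; all names are ours):

```python
# Model case behind (9)-(10): N(x) = (delta'/2) x^2 in one dimension has
# convex conjugate N*(y) = sup_x (x*y - N(x)) = y^2 / (2 delta').
delta_p = 2.0

def N(x):
    return 0.5 * delta_p * x * x

def N_star(y, lo=-100.0, hi=100.0, steps=200001):
    # Crude numeric sup over a uniform grid; adequate for moderate |y|.
    step = (hi - lo) / (steps - 1)
    return max(x * y - N(x) for x in (lo + i * step for i in range(steps)))

for y in (-1.0, 0.0, 0.5, 1.0):
    assert abs(N_star(y) - y * y / (2.0 * delta_p)) < 1e-6
```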

Fig. 1. This is the caption of the figure displaying a white eagle and a white horse on a snow field


Since $u_1$ is a smooth function, we will have $\|hu_1\|_\infty \le \eta$ for $h$ small enough, and inequality (10) will hold, yielding thereby:

\[
\psi(hu_1) \le \frac{h^2}{2}\frac{1}{\lambda}\|u_1\|_2^2 + \frac{h^2}{2}\frac{1}{\delta'}\|u_1\|^2 . \tag{11}
\]

If we choose $\delta'$ close enough to $\delta$, the quantity $\left(\frac{1}{\lambda} + \frac{1}{\delta'}\right)$ will be negative, and we end up with

\[
\psi(hu_1) < 0 \quad \text{for } h \ne 0 \text{ small}. \tag{12}
\]

On the other hand, we check directly that $\psi(0) = 0$. This shows that $0$ cannot be a minimizer of $\psi$, not even a local one. So $\bar{u} \ne 0$ and $\bar{u} \ne \Lambda_o^{-1}(0) = 0$. ⊓⊔

Corollary 1. Assume $H$ is $C^2$ and $(a_\infty, b_\infty)$-subquadratic at infinity. Let $\xi_1, \dots, \xi_N$ be the equilibria, that is, the solutions of $H'(\xi) = 0$. Denote by $\omega_k$ the smallest eigenvalue of $H''(\xi_k)$, and set:

\[
\omega := \operatorname{Min}\{\omega_1, \dots, \omega_k\} . \tag{13}
\]

If:

\[
\frac{T}{2\pi} b_\infty < -E\left[-\frac{T}{2\pi} a_\infty\right] < \frac{T}{2\pi}\omega \tag{14}
\]

then minimization of $\psi$ yields a non-constant $T$-periodic solution $\bar{x}$.

We recall once more that by the integer part $E[\alpha]$ of $\alpha \in \mathbb{R}$, we mean the $a \in \mathbb{Z}$ such that $a < \alpha \le a + 1$. For instance, if we take $a_\infty = 0$, Corollary 2 tells us that $\bar{x}$ exists and is non-constant provided that:

\[
\frac{T}{2\pi} b_\infty < 1 < \frac{T}{2\pi}\omega \tag{15}
\]

or

\[
T \in \left(\frac{2\pi}{\omega}, \frac{2\pi}{b_\infty}\right) . \tag{16}
\]

Proof. The spectrum of $\Lambda$ is $\frac{2\pi}{T}\mathbb{Z} + a_\infty$. The largest negative eigenvalue $\lambda$ is given by $\frac{2\pi}{T}k_o + a_\infty$, where

\[
\frac{2\pi}{T}k_o + a_\infty < 0 \le \frac{2\pi}{T}(k_o + 1) + a_\infty . \tag{17}
\]

Hence:

\[
k_o = E\left[-\frac{T}{2\pi} a_\infty\right] . \tag{18}
\]

The condition $\gamma < -\lambda < \delta$ now becomes:

\[
b_\infty - a_\infty < -\frac{2\pi}{T}k_o - a_\infty < \omega - a_\infty \tag{19}
\]

which is precisely condition (14). ⊓⊔


Lemma 1. Assume that $H$ is $C^2$ on $\mathbb{R}^{2n} \setminus \{0\}$ and that $H''(x)$ is non-degenerate for any $x \ne 0$. Then any local minimizer $\bar{x}$ of $\psi$ has minimal period $T$.

Proof. We know that $\bar{x}$, or $\bar{x} + \xi$ for some constant $\xi \in \mathbb{R}^{2n}$, is a $T$-periodic solution of the Hamiltonian system:

\[
\dot{x} = JH'(x) . \tag{20}
\]

There is no loss of generality in taking $\xi = 0$. So $\psi(x) \ge \psi(\bar{x})$ for all $x$ in some neighbourhood of $\bar{x}$ in $W^{1,2}(\mathbb{R}/T\mathbb{Z}; \mathbb{R}^{2n})$.

But this index is precisely the index $i_T(\bar{x})$ of the $T$-periodic solution $\bar{x}$ over the interval $(0, T)$, as defined in Sect. 2.6. So

\[
i_T(\bar{x}) = 0 . \tag{21}
\]

Now if $\bar{x}$ has a lower period, $T/k$ say, we would have, by Corollary 31:

\[
i_T(\bar{x}) = i_{kT/k}(\bar{x}) \ge k\, i_{T/k}(\bar{x}) + k - 1 \ge k - 1 \ge 1 . \tag{22}
\]

This would contradict (21), and thus cannot happen. ⊓⊔

Notes and Comments. The results in this section are a refined version of 1980; the minimality result of Proposition 14 was the first of its kind.

To understand the nontriviality conditions, such as the one in formula (16), one may think of a one-parameter family $x_T$, $T \in \left(2\pi\omega^{-1}, 2\pi b_\infty^{-1}\right)$ of periodic solutions, $x_T(0) = x_T(T)$, with $x_T$ going away to infinity when $T \to 2\pi\omega^{-1}$, which is the period of the linearized system at $0$.

Table 1. This is the example table taken out of The TeXbook, p. 246

Year         World population
8000 B.C.    5,000,000
50 A.D.      200,000,000
1650 A.D.    500,000,000
1945 A.D.    2,300,000,000
1980 A.D.    4,400,000,000

Theorem 1 (Ghoussoub-Preiss). Assume $H(t, x)$ is $(0, \varepsilon)$-subquadratic at infinity for all $\varepsilon > 0$, and $T$-periodic in $t$

\[
H(t, \cdot) \text{ is convex } \forall t \tag{23}
\]
\[
H(\cdot, x) \text{ is } T\text{-periodic } \forall x \tag{24}
\]
\[
H(t, x) \ge n(\|x\|) \quad \text{with} \quad n(s)s^{-1} \to \infty \text{ as } s \to \infty \tag{25}
\]


\[
\forall \varepsilon > 0 , \ \exists c : \ H(t, x) \le \frac{\varepsilon}{2}\|x\|^2 + c . \tag{26}
\]

Assume also that $H$ is $C^2$, and $H''(t, x)$ is positive definite everywhere. Then there is a sequence $x_k$, $k \in \mathbb{N}$, of $kT$-periodic solutions of the system

\[
\dot{x} = JH'(t, x) \tag{27}
\]

such that, for every $k \in \mathbb{N}$, there is some $p_o \in \mathbb{N}$ with:

\[
p \ge p_o \Rightarrow x_{pk} \ne x_k . \tag{28}
\]

⊓⊔

Example 1 (External forcing). Consider the system:

\[
\dot{x} = JH'(x) + f(t) \tag{29}
\]

where the Hamiltonian $H$ is $(0, b_\infty)$-subquadratic, and the forcing term is a distribution on the circle:

\[
f = \frac{d}{dt}F + f_o \quad \text{with} \quad F \in L^2(\mathbb{R}/T\mathbb{Z}; \mathbb{R}^{2n}) , \tag{30}
\]

where $f_o := T^{-1}\int_o^T f(t)\,dt$. For instance,

\[
f(t) = \sum_{k \in \mathbb{N}} \delta_k \, \xi , \tag{31}
\]

where $\delta_k$ is the Dirac mass at $t = k$ and $\xi \in \mathbb{R}^{2n}$ is a constant, fits the prescription. This means that the system $\dot{x} = JH'(x)$ is being excited by a series of identical shocks at interval $T$.

Definition 1. Let $A_\infty(t)$ and $B_\infty(t)$ be symmetric operators in $\mathbb{R}^{2n}$, depending continuously on $t \in [0, T]$, such that $A_\infty(t) \le B_\infty(t)$ for all $t$.

A Borelian function $H : [0, T] \times \mathbb{R}^{2n} \to \mathbb{R}$ is called $(A_\infty, B_\infty)$-subquadratic at infinity if there exists a function $N(t, x)$ such that:

\[
H(t, x) = \frac{1}{2}(A_\infty(t)x, x) + N(t, x) \tag{32}
\]
\[
\forall t , \ N(t, x) \text{ is convex with respect to } x \tag{33}
\]
\[
N(t, x) \ge n(\|x\|) \quad \text{with} \quad n(s)s^{-1} \to +\infty \text{ as } s \to +\infty \tag{34}
\]
\[
\exists c \in \mathbb{R} : \ H(t, x) \le \frac{1}{2}(B_\infty(t)x, x) + c \quad \forall x . \tag{35}
\]

If $A_\infty(t) = a_\infty I$ and $B_\infty(t) = b_\infty I$, with $a_\infty \le b_\infty \in \mathbb{R}$, we shall say that $H$ is $(a_\infty, b_\infty)$-subquadratic at infinity. As an example, the function $\|x\|^\alpha$, with $1 \le \alpha < 2$, is $(0, \varepsilon)$-subquadratic at infinity for every $\varepsilon > 0$. Similarly, the Hamiltonian

\[
H(t, x) = \frac{1}{2} k \|x\|^2 + \|x\|^\alpha \tag{36}
\]

is $(k, k + \varepsilon)$-subquadratic for every $\varepsilon > 0$. Note that, if $k < 0$, it is not convex.


Notes and Comments. The first results on subharmonics were obtained by Rabinowitz in 1985, who showed the existence of infinitely many subharmonics both in the subquadratic and superquadratic case, with suitable growth conditions on $H'$. Again the duality approach enabled Clarke and Ekeland in 1981 to treat the same problem in the convex-subquadratic case, with growth conditions on $H$ only.

Recently, Michalek and Tarantello (see Michalek, R., Tarantello, G. 1982 and Tarantello, G. 1983) have obtained lower bounds on the number of subharmonics of period $kT$, based on symmetry considerations and on pinching estimates, as in Sect. 5.2 of this article.

References

Clarke, F., Ekeland, I.: Nonlinear oscillations and boundary-value problems for Hamiltonian systems. Arch. Rat. Mech. Anal. 78, 315–333 (1982)
Clarke, F., Ekeland, I.: Solutions periodiques, du periode donnee, des equations hamiltoniennes. Note CRAS Paris 287, 1013–1015 (1978)
Michalek, R., Tarantello, G.: Subharmonic solutions with prescribed minimal period for nonautonomous Hamiltonian systems. J. Diff. Eq. 72, 28–55 (1988)
Tarantello, G.: Subharmonic solutions for Hamiltonian systems via a $\mathbb{Z}_p$ pseudoindex theory. Annali di Matematica Pura (to appear)
Rabinowitz, P.: On subharmonic solutions of a Hamiltonian system. Comm. Pure Appl. Math. 33, 609–633 (1980)


Author Index

Abt I. 7 · Ahmed T. 3 · Andreev V. 24 · Andrieu B. 27 · Arpagaus M. 34

Babaev A. 25 · Barwolff A. 33 · Ban J. 17 · Baranov P. 24 · Barrelet E. 28 · Bartel W. 11 · Bassler U. 28 · Beck H.P. 35 · Behrend H.-J. 11 · Berger Ch. 1 · Bergstein H. 1 · Bernardi G. 28 · Bernet R. 34 · Besancon M. 9 · Biddulph P. 22 · Binder E. 11 · Bischoff A. 33 · Blobel V. 13 · Borras K. 8 · Bosetti P.C. 2 · Boudry V. 27 · Brasse F. 11 · Braun U. 2 · Braunschweig A. 1 · Brisson V. 26 · Bungener L. 13 · Burger J. 11 · Busser F.W. 13 · Buniatian A. 11,37 · Buschhorn G. 25

Campbell A.J. 1 · Carli T. 25 · Charles F. 28 · Clarke D. 5 · Clegg A.B. 18 · Colombo M. 8 · Courau A. 26 · Coutures Ch. 9 · Cozzika G. 9 · Criegee L. 11 · Cvach J. 27

Dagoret S. 28 · Dainton J.B. 19 · Dann A.W.E. 22 · Dau W.D. 16 · Deffur E. 11 · Delcourt B. 26 · Buono Del A. 28 · Devel M. 26 · De Roeck A. 11 · Dingus P. 27 · Dollfus C. 35 · Dreis H.B. 2 · Drescher A. 8 · Dullmann D. 13 · Dunger O. 13 · Duhm H. 12

Ebbinghaus R. 8 · Eberle M. 12 · Ebert J. 32 · Ebert T.R. 19 · Efremenko V. 23 · Egli S. 35 · Eichenberger S. 35 · Eichler R. 34 · Eisenhandler E. 20 · Ellis N.N. 3 · Ellison R.J. 22 · Elsen E. 11 · Evrard E. 4

Favart L. 4 · Feeken D. 13 · Felst R. 11 · Feltesse A. 9 · Fensome I.F. 3 · Ferrarotto F. 31 · Flamm K. 11 · Flauger W. 11 · Flieser M. 25 · Flugge G. 2 · Fomenko A. 24 · Fominykh B. 23 · Formanek J. 30 · Foster J.M. 22 · Franke G. 11 · Fretwurst E. 12

Gabathuler E. 19 · Gamerdinger K. 25 · Garvey J. 3 · Gayler J. 11 · Gellrich A. 13 · Gennis M. 11 · Genzel H. 1 · Godfrey L. 7 · Goerlach U. 11 · Goerlich L. 6 · Gogitidze N. 24 · Goodall A.M. 19 · Gorelov I. 23 · Goritchev P. 23 · Grab C. 34 · Grassler R. 2 · Greenshaw T. 19 · Greif H. 25 · Grindhammer G. 25

Haack J. 33 · Haidt D. 11 · Hamon O. 28 · Handschuh D. 11 · Hanlon E.M. 18 · Hapke M. 11 · Harjes J. 11 · Haydar R. 26 · Haynes W.J. 5 · Hedberg V. 21 · Heinzelmann G. 13 · Henderson R.C.W. 18 · Henschel H. 33 · Herynek I. 29 · Hildesheim W. 11 · Hill P. 11 · Hilton C.D. 22 · Hoeger K.C. 22 · Huet Ph. 4 · Hufnagel H. 14 · Huot N. 28

Itterbeck H. 1

Jabiol M.-A. 9 · Jacholkowska A. 26 · Jacobsson C. 21 · Jansen T. 11 · Jonsson L. 21 · Johannsen A. 13 · Johnson D.P. 4 · Jung H. 2

Kalmus P.I.P. 20 · Kasarian S. 11 · Kaschowitz R. 2 · Kathage U. 16 · Kaufmann H. 33 · Kenyon I.R. 3 · Kermiche S. 26 · Kiesling C. 25 · Klein M. 33 · Kleinwort C. 13 · Knies G. 11 · Ko W. 7 · Kohler T. 1 · Kolanoski H. 8 · Kole F. 7 · Kolya S.D. 22 · Korbel V. 11 · Korn M. 8 · Kostka P. 33 · Kotelnikov S.K. 24 · Krehbiel H. 11 · Krucker D. 2 · Kruger U. 11 · Kubenka J.P. 25 · Kuhlen M. 25 · Kurca T. 17 · Kurzhofer J. 8 · Kuznik B. 32

Lamarche F. 27 · Lander R. 7 · Landon M.P.J. 20 · Lange W. 33 · Lanius P. 25 · Laporte J.F. 9 · Lebedev A. 24 · Leuschner A. 11 · Levonian S. 11,24 · Lewin D. 11 · Ley Ch. 2 · Lindner A. 8 · Lindstrom G. 12 · Linsel F. 11 · Lipinski J. 13 · Loch P. 11 · Lohmander H. 21 · Lopez G.C. 20

Magnussen N. 32 · Mani S. 7 · Marage P. 4 · Marshall R. 22 · Martens J. 32 · Martin A. 19 · Martyn H.-U. 1 · Martyniak J. 6 · Masson S. 2 · Mavroidis A. 20 · McMahon S.J. 19 · Mehta A. 22 · Meier K. 15 · Mercer D. 22 · Merz T. 11 · Meyer C.A. 35 · Meyer H. 32 · Meyer J. 11 · Mikocki S. 6,26 · Milone V. 31 · Moreau F. 27 · Moreels J. 4 · Morris J.V. 5 · Muller K. 35 · Murray S.A. 22

Nagovizin V. 23 · Naroska B. 13 · Naumann Th. 33 · Newton D. 18 · Neyret D. 28 · Nguyen A. 28 · Niebergall F. 13 · Nisius R. 1 · Nowak G. 6 · Nyberg M. 21

Oberlack H. 25 · Obrock U. 8 · Olsson J.E. 11 · Ould-Saada F. 13

Pascaud C. 26 · Patel G.D. 19 · Peppel E. 11 · Phillips H.T. 3 · Phillips J.P. 22 · Pichler Ch. 12 · Pilgram W. 2 · Pitzl D. 34 · Prell S. 11 · Prosi R. 11

Radel G. 11 · Raupach F. 1 · Rauschnabel K. 8 · Reinshagen S. 11 · Ribarics P. 25 · Riech V. 12 · Riedlberger J. 34 · Rietz M. 2 · Robertson S.M. 3 · Robmann P. 35 · Roosen R. 4 · Royon C. 9 · Rudowicz M. 25 · Rusakov S. 24 · Rybicki K. 6

Sahlmann N. 2 · Sanchez E. 25 · Savitsky M. 11 · Schacht P. 25 · Schleper P. 14 · von Schlippe W. 20 · Schmidt D. 32 · Schmitz W. 2 · Schoning A. 11 · Schroder V. 11 · Schulz M. 11 · Schwab B. 14 · Schwind A. 33 · Seehausen U. 13 · Sell R. 11 · Semenov A. 23 · Shekelyan V. 23 · Shooshtari H. 25 · Shtarkov L.N. 24 · Siegmon G. 16 · Siewert U. 16 · Skillicorn I.O. 10 · Smirnov P. 24 · Smith J.R. 7 · Smolik L. 11 · Spitzer H. 13 · Staroba P. 29 · Steenbock M. 13 · Steffen P. 11 · Stella B. 31 · Stephens K. 22 · Stosslein U. 33 · Strachota J. 11 · Straumann U. 35 · Struczinski W. 2

Taylor R.E. 36,26 · Tchernyshov V. 23 · Thiebaux C. 27 · Thompson G. 20 · Truol P. 35 · Turnau J. 6

Urban L. 25 · Usik A. 24

Valkarova A. 30 · Vallee C. 28 · Van Esch P. 4 · Vartapetian A. 11 · Vazdik Y. 24 · Verrecchia P. 9 · Vick R. 13 · Vogel E. 1

Wacker K. 8 · Walther A. 8 · Weber G. 13 · Wegner A. 11 · Wellisch H.P. 25 · West L.R. 3 · Willard S. 7 · Winde M. 33 · Winter G.-G. 11 · Wolff Th. 34 · Wright A.E. 22 · Wulff N. 11

Yiou T.P. 28

Zacek J. 30 · Zeitnitz C. 12 · Ziaeepour H. 26 · Zimmer M. 11 · Zimmermann W. 11


Subject Index

Absorption 327
Absorption of radiation 289–292, 299, 300
Actinides 244
Aharonov-Bohm effect 142–146
Angular momentum 101–112
– algebraic treatment 391–396
Angular momentum addition 185–193
Angular momentum commutation relations 101
Angular momentum quantization 9–10, 104–106
Angular momentum states 107, 321, 391–396
Antiquark 83
α-rays 101–103
Atomic theory 8–10, 219–249, 327
Average value (see also Expectation value) 15–16, 25, 34, 37, 357

Baker-Hausdorff formula 23
Balmer formula 8
Balmer series 125
Baryon 220, 224
Basis 98
Basis system 164, 376
Bell inequality 379–381, 382
Bessel functions 201, 313, 337
– spherical 304–306, 309, 313–314, 322
Bound state 73–74, 78–79, 116–118, 202, 267, 273, 306, 348, 351
Boundary conditions 59, 70
Bra 159
Breit-Wigner formula 80, 84, 332
Brillouin-Wigner perturbation theory 203

Cathode rays 8
Causality 357–359
Center-of-mass frame 232, 274, 338
Central potential 113–135, 303–314
Centrifugal potential 115–116, 323
Characteristic function 33
Clebsch-Gordan coefficients 191–193
Cold emission 88
Combination principle, Ritz's 124
Commutation relations 27, 44, 353, 391
Commutator 21–22, 27, 44, 344
Compatibility of measurements 99
Complete orthonormal set 31, 40, 160, 360
Complete orthonormal system, see Complete orthonormal set
Complete set of observables, see Complete set of operators

Eigenfunction 34, 46, 344–346
– radial 321
– – calculation 322–324
EPR argument 377–378
Exchange term 228, 231, 237, 241, 268, 272

f-sum rule 302
Fermi energy 223

H₂⁺ molecule 26
Half-life 65
Holzwarth energies 68