
FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on

Delaunay Graphs

W. Nicholas Greene Nicholas Roy

Computer Science and Artificial Intelligence Laboratory

Massachusetts Institute of Technology

{wng,nickroy}@csail.mit.edu

Figure 1: Fast Lightweight Mesh Estimation: FLaME generates 3D mesh reconstructions from monocular images in real-time onboard

computationally constrained platforms. The key to the approach is a graph-based variational optimization framework that allows for the

mesh to be efficiently smoothed and refined. The top row of images (from left to right) shows the meshes computed onboard a small

autonomous robot flying at 3.5 meters-per-second as it avoids a tree. The bottom row shows the current frame (left), the collision-free plan

in pink (middle), and the dense depthmap generated from the mesh (right) for each timestep along the approach.

Abstract

We propose a lightweight method for dense online

monocular depth estimation capable of reconstructing 3D

meshes on computationally constrained platforms. Our

main contribution is to pose the reconstruction problem as a

non-local variational optimization over a time-varying De-

launay graph of the scene geometry, which allows for an

efficient, keyframeless approach to depth estimation. The

graph can be tuned to favor reconstruction quality or speed

and is continuously smoothed and augmented as the camera

explores the scene. Unlike keyframe-based approaches, the

optimized surface is always available at the current pose,

which is necessary for low-latency obstacle avoidance.

FLaME (Fast Lightweight Mesh Estimation) can gener-

ate mesh reconstructions at upwards of 230 Hz using less

than one Intel i7 CPU core, which enables operation on

size, weight, and power-constrained platforms. We present

results from both benchmark datasets and experiments run-

ning FLaME in-the-loop onboard a small flying quadrotor.

1. Introduction

Estimating dense 3D geometry from 2D images taken

from a moving monocular camera is a fundamental prob-

lem in computer vision with a wide range of applications

in robotics and augmented reality (AR). Though the visual

tracking component of monocular simultaneous localiza-

tion and mapping (SLAM) has reached a certain level of

maturity over the last ten years [13, 9, 12, 10, 8, 14, 15], ef-

ficiently reconstructing dense environment representations

for autonomous navigation or AR on small size-weight-and-

power (SWaP) constrained platforms (such as mobile robots

and smartphones) is still an active research front. Current

approaches either transmit information to a groundstation

for processing [28, 1, 5], sacrifice density [7, 6, 10, 14],

run at significantly reduced framerates [21, 22], or limit

the reconstruction volume to small scenes [16] or to past

keyframes [11], all of which restrict their utility in practice,

especially for mobile robot navigation. In this paper, we

propose a novel monocular depth estimation pipeline that


Figure 2: Second Order Smoothing: FLaME estimates dense inverse depth meshes by optimizing a non-local, second-order variational

smoothness cost over a semi-sparsely sampled Delaunay Graph. Minimizing this cost function promotes piecewise planar structure as

shown above on benchmark data collected from a handheld Kinect [27]. From left to right, each column shows the input RGB image, the

Kinect depthmap, the generated FLaME depthmap, the FLaME mesh in the current view, and the FLaME mesh projected into 3D. Note

the smooth planar reconstructions that are enabled by the approach and the accuracy of the depthmaps relative to those from the Kinect.

enables dense geometry to be efficiently computed at up-

wards of 230 Hz using less than one Intel i7 CPU core —

a small enough footprint to fit completely onboard an au-

tonomous micro-aerial vehicle (MAV), with sufficient ac-

curacy to enable closed-loop motion planning using the re-

constructions.

Our key insight is to recognize that for many environ-

ments and applications, the “every-pixel” methods that are

currently in vogue for dense depth estimation massively

oversample scenes relative to (a) their true geometric com-

plexity, (b) the observable geometric complexity given the

available texture and baseline, and (c) the additional compu-

tational effort required to spatially regularize (i.e. smooth)

noisy and outlier-prone depth estimates. Oversampling the

scene requires more computation for stereo matching, an

unnecessary cost for pixels whose depth might be either

redundant given the geometry or weakly observable given

the environment texture or camera motion, and significantly

increases the sophistication and runtime of regularization

needed to produce plausible reconstructions (which often

requires what amounts to batch optimizations over select

keyframes in the past while the camera continues explor-

ing).

Instead of a dense every-pixel approach, we propose

a novel alternative that we call FLaME (Fast Lightweight

Mesh Estimation) that directly estimates a triangular mesh

of the scene (similar to the stereo work of [18]) and is ad-

vantageous for several reasons. First, meshes are more com-

pact, efficient representations of the geometry and there-

fore require fewer depth estimates to encode the scene for

a given level of detail. Second, by interpreting the mesh as

a graph we show that we can exploit its connectivity struc-

ture to apply (and accelerate) state-of-the-art second-order

variational regularization techniques that otherwise require

GPUs to run online. Third, by reformulating the regular-

ization objective in terms of the vertices and edges of this

graph, we allow the smoothing optimization to be both in-

cremental (in that new terms can be trivially added and re-

moved as the graph is modified over time) and keyframeless

(in that the solution can be easily propagated in time with-

out restarting the optimization).

We show significant improvements over existing ap-

proaches on benchmark data in terms of runtime, CPU load,

density, and accuracy, and present results from flight exper-

iments running FLaME in-the-loop onboard a small MAV

flying at up to 3.5 meters-per-second (see Figures 1 and 2).

2. Preliminaries

We first detail the notation used in the rest of the pa-

per (Section 2.1) and give a brief overview of variational

smoothing methods (Section 2.2) before describing the ap-

proach in detail (Section 3).

2.1. Notation

We represent images as scalar functions defined over the

pixel domain Ω ⊂ R2, such that Ik : Ω → R denotes the

image taken at time index k. We represent the pose of the

camera at time k with respect to the world frame W by the

transform TWk ∈ SE(3).

We let K ∈ R^{3×3} denote the intrinsic camera matrix. Vectors in homogeneous coordinates are given by x̄ = [x^T 1]^T. The functions π(x, y, z) = (x/z, y/z) and π^{-1}(u, d) = K^{-1}(d · ū) denote the perspective projection function and its inverse for pixel u ∈ Ω given depth d, respectively. The projection of a point pW ∈ R^3 in the world into the camera at time k is therefore u = π(K TkW p̄W) (the de-homogenization is implied). Finally, the inverse depthmap at time k is given by the function ξk : Ω → R+.
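To make the notation concrete, here is a minimal sketch (ours, not code from the paper) of the camera model in NumPy; the intrinsic values in K are placeholder examples.

```python
import numpy as np

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])  # example pinhole intrinsics (assumed values)

def pi(p):
    """Perspective projection pi(x, y, z) = (x/z, y/z)."""
    return p[:2] / p[2]

def pi_inv(u, d):
    """Inverse projection pi^{-1}(u, d) = K^{-1}(d * u_bar) for pixel u, depth d."""
    u_bar = np.array([u[0], u[1], 1.0])   # homogeneous pixel coordinates
    return np.linalg.inv(K) @ (d * u_bar)

def project(T_kW, p_W):
    """Project world point p_W into the camera at time k: u = pi(K T_kW p_bar_W)."""
    p_cam = (T_kW @ np.append(p_W, 1.0))[:3]  # rigid transform into the camera frame
    return pi(K @ p_cam)                      # de-homogenization implied by pi
```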


2.2. Variational Smoothing

In the continuous setting, variational methods seek to

minimize objective functionals of the following form:

E(f) = Esmooth(f) + λEdata(f), (1)

for scalar function f : X → R and various choices of

smoothness term Esmooth, data fidelity term Edata, and

scalar λ > 0 that controls the balance of data-fitting ver-

sus smoothness.

When a noisy signal z : X → R is observed, a common

choice of Edata(f) is an outlier-robust L1 norm:

Edata(f) = ∫_X |f(x) − z(x)| dx.  (2)

One powerful choice of Esmooth is the second-order To-

tal Generalized Variation (TGV2) semi-norm of [2]:

TGV2(f) = min_{w(x) ∈ R^2} α ∫_X |∇f(x) − w(x)| dx + β ∫_X |∇w(x)| dx,  (3)

which introduces auxiliary function w : X → R2 and

weights α, β ≥ 0. This functional penalizes discontinu-

ities in the first two derivatives of f and promotes piecewise

affine solutions. The contributions of the first and second

derivatives to the overall cost are controlled by α and β.

It is important to note that this functional only incorpo-

rates local information through the gradient operator. A

non-local extension (NLTGV2) was therefore developed in

[20] so that information beyond immediately neighboring

pixels could influence the objective:

NLTGV2(f) = min_{w(x) ∈ R^2} ∫_X ∫_X α(x, y) |f(x) − f(y) − ⟨w(x), x − y⟩| dx dy
          + ∫_X ∫_X β(x, y) |w1(x) − w1(y)| dx dy
          + ∫_X ∫_X β(x, y) |w2(x) − w2(y)| dx dy,  (4)

for w(x) = (w1(x), w2(x)) and weight functions

α(x,y) ≥ 0 and β(x,y) ≥ 0, which encode the weighted,

non-local gradients.

The work of Pinies et al. showed that when f is inter-

preted as an inverse depthmap ξ, smoothing with NLTGV2

leads not only to piecewise affine solutions over X = Ω,

but also over R3 when ξ is projected into 3D using π−1 (a

non-trivial result) [19].

Although the choices of Edata and Esmooth outlined

above are not differentiable, they are convex and can thus

Algorithm 1 Method of Chambolle and Pock [4]

// Choose σ, τ > 0, θ ∈ [0, 1].
while not converged do
    q^{k+1} = prox_{F∗}(q^k + σ D x̄^k)
    x^{k+1} = prox_G(x^k − τ D∗ q^{k+1})
    x̄^{k+1} = x^{k+1} + θ(x^{k+1} − x^k)

be efficiently minimized using convex optimization tech-

niques. One popular optimization scheme is the first-order,

primal-dual method of Chambolle and Pock [4], which

solves optimization problems of the following form:

min_{x ∈ R^n} F(Dx) + G(x),  (5)

where F : R^m → R+ and G : R^n → R+ are convex and D : R^n → R^m is a linear operator that usually encodes

discrete gradients. The essence of the Chambolle and Pock

approach is to represent F in terms of its convex conjugate

and dual variable q ∈ Rm, resulting in the following sad-

dlepoint problem:

min_{x ∈ R^n} max_{q ∈ R^m} ⟨Dx, q⟩ − F∗(q) + G(x).  (6)

Optimal values for primal variable x and dual variable q are

then obtained by repeated application of the proximal oper-

ator that generalizes gradient descent to non-differentiable

functions [17] (see Algorithm 1).
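As a concrete illustration, the following is a minimal sketch of Algorithm 1 applied to 1-D TV-L1 denoising, i.e. F(Dx) = |Dx|_1 with D a forward-difference operator and G(x) = λ|x − z|_1. The prox operators are the standard ones for these terms; the step sizes are common defaults, not values prescribed by the paper.

```python
import numpy as np

def tv_l1_denoise(z, lam=1.0, sigma=0.25, tau=0.25, theta=1.0, iters=200):
    """Chambolle-Pock primal-dual iteration for min_x |Dx|_1 + lam*|x - z|_1."""
    n = len(z)
    D = np.diff(np.eye(n), axis=0)  # forward differences, shape (n-1, n)
    x, x_bar = z.copy(), z.copy()
    q = np.zeros(n - 1)             # dual variable
    for _ in range(iters):
        # Dual ascent followed by prox of F* (projection onto |q_i| <= 1):
        q = q + sigma * (D @ x_bar)
        q = q / np.maximum(1.0, np.abs(q))
        # Primal descent followed by prox of G (soft threshold toward the data z):
        x_new = x - tau * (D.T @ q)
        x_new = z + np.sign(x_new - z) * np.maximum(np.abs(x_new - z) - tau * lam, 0.0)
        # Extragradient step:
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

noisy = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
smooth = tv_l1_denoise(noisy)  # recovers an approximately piecewise constant signal
```

Section 3.3 replaces the local TV-style prior of this toy problem with the non-local NLTGV2 term defined over the Delaunay graph.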

3. Algorithm Overview

FLaME directly estimates an inverse depth mesh of

the environment that efficiently encodes the scene geome-

try and allows for efficient, incremental, and keyframeless

second-order variational regularization to recover smooth

surfaces. Given an image sequence Ik from a moving cam-

era with known pose TWk , our task entails:

• Estimating the inverse depth for a set of sampled pixels

(Section 3.1)

• Constructing the mesh using the sampled points (Sec-

tion 3.2)

• Defining a suitable smoothness cost over the graph in-

duced by the mesh (Section 3.3)

• Minimizing the smoothness cost (Section 3.4)

• Projecting the mesh from frame to frame (Section 3.5)

See Figure 3 for a block diagram of the data flow.

3.1. Feature Inverse Depth Estimation

We first estimate the inverse depth for a set of “trackable” pixels (or features) sampled over the image domain that will serve as candidate vertices to insert into our mesh. Let Fk denote the current set of features. Each feature f ∈ Fk is detected at timestep ft and defined at location fu ∈ Ω in the image Ift at pose TWft.


Figure 3: FLaME Overview: FLaME operates on image streams

with known poses. Inverse depth is estimated for a set of fea-

tures using the fast, filtering-based approach of [8]. When the in-

verse depth estimate for a given feature converges, it is inserted

as a new vertex in a graph defined in the current frame and com-

puted through Delaunay triangulations. This Delaunay graph is

then used to efficiently smooth away noise in the inverse depth

values and promote piecewise planar structure by minimizing a

second-order variational cost defined over the graph.

We select features by dividing Ω into grid cells of size

2L×2L based on a user-set detail level L (see Figure 4) and

selecting a pixel in each cell as a new feature if certain crite-

ria are met. First, we do not select features in cells that con-

tain the projection of another feature (this ensures we main-

tain a certain desired detail level). If no other feature in Fk

falls into a given grid cell, then for each pixel u in the grid

cell we compute a trackability score s(u) = |∇Ik(u)^T eu|

based on the image gradient ∇Ik(u) and epipolar direction

eu induced by the previous frame. This score is a simple

metric for determining pixels that will be easy to match in

future frames given the camera motion. If the pixel in the

window with the maximum score passes a threshold, we add

it as a feature to Fk.
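A minimal sketch of this selection criterion (an assumed NumPy implementation, not the paper's code; for simplicity the epipolar direction is treated as constant over a cell):

```python
import numpy as np

def best_feature_in_cell(image, cell, epipolar_dir, threshold=10.0):
    """Return the best trackable pixel in one 2^L x 2^L grid cell, or None.

    image:        2-D grayscale array I_k.
    cell:         (row0, col0, size) of the cell.
    epipolar_dir: unit 2-vector e_u induced by the previous frame.
    """
    gy, gx = np.gradient(image.astype(float))  # image gradient, row/col order
    r0, c0, s = cell
    # Trackability score s(u) = |grad I_k(u) . e_u| for every pixel in the cell.
    scores = np.abs(gx[r0:r0 + s, c0:c0 + s] * epipolar_dir[0] +
                    gy[r0:r0 + s, c0:c0 + s] * epipolar_dir[1])
    idx = np.unravel_index(np.argmax(scores), scores.shape)
    if scores[idx] > threshold:
        return (r0 + idx[0], c0 + idx[1])
    return None
```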

Next, we estimate an inverse depth mean ξf and variance

σ2f for each f ∈ Fk by matching a reference patch of pix-

els around fu in future frames using a direct search along

the epipolar line. For a given match, we compute an inverse

depth measurement with mean ξz and variance σ2z accord-

ing to the noise model in [8] and fuse it with the feature’s

current estimate using standard Bayesian fusion:

ξf ← (ξf σz² + ξz σf²) / (σf² + σz²),    σf² ← (σf² σz²) / (σf² + σz²).  (7)
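Equation (7) is the standard product of two Gaussians in inverse depth; a minimal sketch of the update (variable names are ours):

```python
def fuse(xi_f, var_f, xi_z, var_z):
    """Fuse the feature prior (xi_f, var_f) with a measurement (xi_z, var_z)."""
    xi_new = (xi_f * var_z + xi_z * var_f) / (var_f + var_z)
    var_new = (var_f * var_z) / (var_f + var_z)  # variance never increases
    return xi_new, var_new
```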

3.2. Mesh Construction

We construct our mesh using the set of features F∗k ⊆ Fk whose inverse depth variance is lower than a threshold σ²max: F∗k = {f ∈ Fk : σf² < σ²max}. We project these features into the current camera frame TWk and then compute a 2D Delaunay triangulation of the projected pixel locations using the fast method of [23, 24]. We denote the Delaunay triangulation by DT(F∗k) = (Vk, Tk), where Vk is the set of mesh vertices and Tk is the set of triangles. The Delaunay triangulation is optimal in the sense that it maximizes the minimum angle over all triangles in Tk.

We denote the feature corresponding to vertex v ∈ Vk

with vf ∈ F∗k and let vu ∈ Ω denote the pixel location of v,

which we initialize with the projection of f into the current

frame k:

vu = π(K Tkft π^{-1}(fu, ξf^{-1})).  (8)

Finally, we assign an inverse depth to each new vertex that

we refer to as vz and initialize it to the feature inverse depth

ξf of the corresponding feature f projected into the current

frame. Note that although we perform our triangulation in

2D using the vertex pixel locations, we can project the mesh

to 3D using this inverse depth value. We can also obtain a

dense inverse depthmap ξ : Ω→ R+ by linearly interpolat-

ing the inverse depth values of the mesh vertices.
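A minimal sketch of this step using SciPy (an illustrative assumption on our part; the paper itself uses the Triangle library of [23, 24]):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dense_inverse_depth(vertices_uv, vertices_xi, height, width):
    """vertices_uv: (N, 2) pixel locations; vertices_xi: (N,) inverse depths."""
    tri = Delaunay(vertices_uv)                      # DT(F*_k) = (V_k, T_k)
    interp = LinearNDInterpolator(tri, vertices_xi)  # linear over each triangle
    us, vs = np.meshgrid(np.arange(width), np.arange(height))
    xi = interp(np.stack([us.ravel(), vs.ravel()], axis=1))
    return xi.reshape(height, width)                 # NaN outside the mesh hull
```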

3.3. Non-Local Second Order Variational Cost

Now equipped with an inverse depth mesh DT (F∗k ), we

formulate our non-local, graph-based variational regularizer

that will efficiently smooth away noise in the mesh and pro-

mote planar structure.

We start with the continuous NLTGV2-L1 variational

cost for a fully dense inverse depthmap ξ : Ω→ R+, which

sets the smoothing term Esmooth(ξ) to NLTGV2 defined in

Equation (4) and the data fidelity term Edata(ξ) to the ro-

bust L1 norm in Equation (2):

E(ξ) = NLTGV2(ξ) + λ ∫_Ω |ξ(u) − z(u)| du.  (9)

Figure 4: Detail vs. Speed: The density of tracked features can

be tuned to favor geometric detail or speed. Here we compare

the depthmaps (bottom row, columns 2 and 3) generated from the

smoothed graph (top row, columns 2-3) to a Kinect depthmap (bot-

tom left) using different settings of detail parameter L (see Sec-

tion 3.1). The input RGB image is shown in the top left and the

depthmaps are colored by depth. Changing L from 4 (column 2) to

5 (column 3) results in nearly a 50 percent speedup (see Table 1).


Here z : Ω → R+ is our raw, unsmoothed inverse

depthmap.

We will approximate this functional over the fully dense

ξ using our inverse depth mesh DT(F∗k). We first reinterpret DT(F∗k) as a directed Delaunay graph DG(F∗k) = (Vk, Ek), with identical vertices Vk and directed, non-parallel edges Ek generated from the triangle set Tk (the direction of each edge is arbitrary). For each vertex v ∈ Vk, we assign a smoothed inverse depth value that we denote vξ and an auxiliary variable vw ∈ R^2 such that vw = (vw1, vw2). We let vx denote (vξ, vw).

The graph version of our L1 data fidelity term is straightforward to define in terms of DG(F∗k) by replacing the integral over the image domain Ω with a sum over the vertices of DG(F∗k):

∫_Ω |ξ(u) − z(u)| du ≈ Σ_{v ∈ Vk} |vξ − vz|,  (10)

where vz is the inverse depth of feature vf projected into

the current frame.

Discretizing the NLTGV2 smoothing term over DG(F∗k)

simply requires a special setting of the weight functions

α, β : Ω × Ω → R. In the non-local, variational frame-

work, these functions control the influence of inverse depth

values over their spatial neighbors and thus should be de-

fined in terms of the edge set Ek of DG(F∗k ).

For each edge e ∈ Ek, we denote the associated vertices

as vi, vj ∈ Vk (note again that the edges are directed from i to j). We then assign weights eα, eβ ≥ 0 to each edge and set the functions α and β to the following:

α(u, v) = eα δ(u − v^i_u, v − v^j_u) for e ∈ Ek,  (11)
β(u, v) = eβ δ(u − v^i_u, v − v^j_u) for e ∈ Ek.  (12)

Setting α, β in terms of delta functions that encode the con-

nectivity in the graph (i.e. the edges Ek) reduces the double

integral over Ω in Equation 4 to a summation over Ek:

NLTGV2(ξ) ≈ Σ_{e ∈ Ek} eα |v^i_ξ − v^j_ξ − ⟨v^i_w, v^i_u − v^j_u⟩|
          + Σ_{e ∈ Ek} eβ |v^i_w1 − v^j_w1|
          + Σ_{e ∈ Ek} eβ |v^i_w2 − v^j_w2|
          = Σ_{e ∈ Ek} ‖De(v^i_x, v^j_x)‖_1.  (13)

Here De is a linear operator that acts on the vertices corre-

sponding to edge e:

De(v^i_x, v^j_x) = [ eα (v^i_ξ − v^j_ξ − ⟨v^i_w, v^i_u − v^j_u⟩)
                     eβ (v^i_w1 − v^j_w1)
                     eβ (v^i_w2 − v^j_w2) ].  (14)

The final form of our graph-based NLTGV2-L1 cost functional is now

E(DG(F∗k)) = Σ_{e ∈ Ek} ‖De(v^i_x, v^j_x)‖_1 + λ Σ_{v ∈ Vk} |vξ − vz|.  (15)

Note that by defining the NLTGV2-L1 variational cost in

terms of DG(F∗k), we can trivially augment and refine

the objective by simply adding new vertices and edges to the

graph, just as the mesh DT (F∗k ) is augmented and refined

using incremental triangulations.
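To make the graph cost concrete, the following sketch evaluates Equation (15); the vertex layout vx = (ξ, w1, w2) and all variable names are our assumptions, not the paper's data structures.

```python
import numpy as np

def D_e(vi, vj, ui, uj, e_alpha, e_beta):
    """Linear edge operator of Equation (14) for the directed edge (i -> j)."""
    du = ui - uj
    return np.array([
        e_alpha * (vi[0] - vj[0] - (vi[1] * du[0] + vi[2] * du[1])),
        e_beta * (vi[1] - vj[1]),
        e_beta * (vi[2] - vj[2])])

def graph_cost(x, u, z, edges, alphas, betas, lam):
    """x: (N, 3) vertex states; u: (N, 2) pixel locations; z: (N,) data values."""
    smooth = sum(np.abs(D_e(x[i], x[j], u[i], u[j], a, b)).sum()
                 for (i, j), a, b in zip(edges, alphas, betas))
    data = lam * np.abs(x[:, 0] - z).sum()
    return smooth + data
```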

3.4. Graph Optimization

Having reformulated the NLTGV2-L1 cost in terms of

graph DG(F∗k ), we now apply the optimization method of

Chambolle and Pock [4]. We see that the summation over Ek and the summation over Vk in Equation 15 correspond to F(Dx) and G(x), respectively, in the Chambolle and Pock objective in Equation 5 with x = [vx] for v ∈ Vk, and that we can follow the same optimization approach.

We first generate the saddlepoint problem induced by

this new graph-based cost by re-expressing the L1 norm

corresponding to the sum over edges in terms of its convex

conjugate, which can in turn be composed of the conjugates

of each term in the sum:

‖De(v^i_x, v^j_x)‖_1 = max_{eq} ⟨De(v^i_x, v^j_x), eq⟩ − δQ(eq),  (16)

where we have assigned a dual variable q ∈ R3 to each

edge, denoted by eq. The indicator term δQ is the convex conjugate of the L1 norm and is defined as

δQ(eq) = Σ^3_{i=1} δq(eq_i),  where δq(q) = 0 if |q| ≤ 1 and ∞ otherwise.  (17)

The NLTGV2-L1 saddlepoint problem can now be written

in terms of DG(F∗k ) as:

min_{vx} max_{eq} Σ_{e ∈ Ek} [⟨De(v^i_x, v^j_x), eq⟩ − δQ(eq)] + λ Σ_{v ∈ Vk} |vξ − vz|.  (18)

To optimize Equation 18, we first perform semi-implicit subgradient ascent over eq for each e ∈ Ek:

eq^{n+1} = eq^n + σ De(v^i_x, v^j_x) − σ ∂δQ(eq^{n+1}),  (19)

where σ > 0 is the dual step size and ∂δQ(q) is the subgradient of δQ(q). Moving the eq^{n+1} terms to the left side of the equation yields:

eq^{n+1} + σ ∂δQ(eq^{n+1}) = eq^n + σ De(v^i_x, v^j_x),  (20)


Algorithm 2 NLTGV2-L1 Graph Optimization

// Choose σ, τ > 0, θ ∈ [0, 1].
while not converged do
    for each e ∈ Ek do
        eq^{n+1} = prox_{F∗}(eq^n + σ De(v̄^i_x, v̄^j_x))
    for each v ∈ Vk do
        vx^{n+1} = prox_G(vx^n − τ Σ_{e ∈ Nin(v)} D∗in(eq^{n+1}) − τ Σ_{e ∈ Nout(v)} D∗out(eq^{n+1}))
        v̄x^{n+1} = vx^{n+1} + θ(vx^{n+1} − vx^n)

which we can express in terms of the proximal operator:

eq^{n+1} = prox_{F∗}(eq^n + σ De(v^i_x, v^j_x)).  (21)

We next wish to perform semi-implicit subgradient de-

scent over vx for each v ∈ Vk, but some care must be taken

with forming the adjoint operator D∗e. We observe that the operator De(v^i_x, v^j_x) maps two primal vertex variables (corresponding to the source and destination vertex) to the dual space for each edge. The adjoint must therefore map a single dual edge variable to the space of two primal variables

(again corresponding to the source and destination vertex).

Starting from the expression of De in Equation 14, we form

the adjoint as

D∗e(eq) = [  eα eq1
            −eα (e^i_u − e^j_u) eq1 + eβ eq2
            −eα (e^i_v − e^j_v) eq1 + eβ eq3
            −eα eq1
            −eβ eq2
            −eβ eq3 ]   (22)

        = [ D∗in(eq)
            D∗out(eq) ],   (23)

where we have partitioned the top three rows of D∗e into D∗in, which maps eq to the source primal vertex space, and the bottom three rows of D∗e into D∗out, which maps eq to the destination primal vertex space.

The semi-implicit subgradient descent equation for each vx is then given by

vx^{n+1} = vx^n − τ ∂G(vx^{n+1}) − τ Σ_{e ∈ Nin(v)} D∗in(eq^{n+1}) − τ Σ_{e ∈ Nout(v)} D∗out(eq^{n+1}),  (24)

where τ > 0 is the primal step size and the incoming and outgoing edges of v are denoted as Nin(v) and Nout(v), respectively.

Figure 5: Spatial Regularization: FLaME minimizes a non-local,

variational smoothness cost defined over a Delaunay graph, which

efficiently generates piecewise planar mesh reconstructions from

noisy inverse depth estimates. The above images show the meshes

produced from raw, unsmoothed inverse depth values (left) and

those smoothed with FLaME (right).

Solving for vx^{n+1} in terms of the proximal operator of G then yields

vx^{n+1} = prox_G(vx^n − τ Σ_{e ∈ Nin(v)} D∗in(eq^{n+1}) − τ Σ_{e ∈ Nout(v)} D∗out(eq^{n+1})).  (25)

The final step of the Chambolle and Pock method is a

simple extragradient step applied to each vertex. The full

optimization is summarized in Algorithm 2. By express-

ing the optimization in terms of the graph DG(F∗k ), we can

trivially add vertices and edges to the objective and remove them as new features are added to F∗k and triangulated. In

addition, by matching the density of vertices to the geomet-

ric complexity of the observed environment, each optimiza-

tion iteration is both fast to perform and quick to converge

(see Figure 5 for a comparison between smoothed and un-

smoothed meshes). The graph interpretation also provides

additional intuition into the optimization, which alternates

between operations on the vertices and edges of the graph:

smoothing updates are passed from vertices to neighboring

edges, and then from the edges back to the corresponding

vertices.
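Putting the pieces together, here is a minimal sketch of Algorithm 2 over the graph, reusing the data layout assumed in the cost sketch of Section 3.3. The per-edge gradient accumulation implements the D∗in/D∗out split of Equations (22) and (23); the prox operators follow Equations (17), (21), and (25); the step sizes are illustrative, not the paper's settings.

```python
import numpy as np

def optimize_graph(x, u, z, edges, alphas, betas, lam,
                   sigma=0.2, tau=0.2, theta=1.0, iters=100):
    """x: (N, 3) vertex states (xi, w1, w2); u: (N, 2) pixels; z: (N,) data."""
    q = np.zeros((len(edges), 3))  # one dual variable per edge
    x_bar = x.copy()
    for _ in range(iters):
        grad = np.zeros_like(x)
        for k, (i, j) in enumerate(edges):
            a, b, du = alphas[k], betas[k], u[i] - u[j]
            # Dual ascent with D_e, then prox_F*: clip each component to [-1, 1].
            q[k] += sigma * np.array([
                a * (x_bar[i, 0] - x_bar[j, 0]
                     - (x_bar[i, 1] * du[0] + x_bar[i, 2] * du[1])),
                b * (x_bar[i, 1] - x_bar[j, 1]),
                b * (x_bar[i, 2] - x_bar[j, 2])])
            q[k] = np.clip(q[k], -1.0, 1.0)
            # Adjoint: D*_in maps q to the source vertex, D*_out to the destination.
            grad[i] += [a * q[k, 0],
                        -a * du[0] * q[k, 0] + b * q[k, 1],
                        -a * du[1] * q[k, 0] + b * q[k, 2]]
            grad[j] += [-a * q[k, 0], -b * q[k, 1], -b * q[k, 2]]
        # Primal descent, then prox_G: soft threshold of xi toward the data z
        # (the auxiliary w components carry no data term).
        x_new = x - tau * grad
        x_new[:, 0] = z + np.sign(x_new[:, 0] - z) * np.maximum(
            np.abs(x_new[:, 0] - z) - tau * lam, 0.0)
        # Extragradient step on every vertex.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```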

3.5. Frame-to-Frame Propagation

We propagate the Delaunay graph DG(F∗k ) so that the

optimized surface is always available in the current frame.

At each timestep, we set the vertex location vu for each v ∈ Vk to the projection based on the smoothed inverse depth value vξ, which we then update to be expressed in the current frame as well. We also set the unsmoothed inverse depth value vz to the projection of the underlying feature inverse depth ξf.

With new vertex locations and potentially new features,

we retriangulate to maintain the Delaunay optimality prop-

erty and add and remove edges to reflect the new connec-

tivity in the triangulation. We also remove vertices and fea-

tures that project outside the view of the current camera, al-

though these could be saved and displayed separately. Since

the relative transform from frame-to-frame is small for high


framerate cameras, and because the optimization is typi-

cally able to converge before the next frame is available, the

optimization is relatively unaffected by the projection step

and we benefit from the smoothed surface always being available

at the current camera.
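A minimal sketch of the per-vertex projection (an assumed implementation; T_new_old denotes the relative transform from the previous frame to the current one, supplied by the external pose source):

```python
import numpy as np

def propagate_vertex(v_u, v_xi, T_new_old, K, K_inv):
    """Move a vertex (pixel v_u, inverse depth v_xi) into the current frame."""
    p_old = (K_inv @ np.array([v_u[0], v_u[1], 1.0])) / v_xi  # back-project
    p_new = (T_new_old @ np.append(p_old, 1.0))[:3]           # rigid transform
    uv_new = (K @ p_new)[:2] / p_new[2]                       # re-project
    return uv_new, 1.0 / p_new[2]          # new pixel location and inverse depth
```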

4. Evaluation

Our implementation of FLaME is written in C++ and

makes use of the Boost Graph Library [25] and the De-

launay triangulation implementation from [23, 24]. The

primary processing thread handles stereo matching and in-

verse depth filtering, performs Delaunay triangulations, up-

dates the Delaunay graph, and publishes output. A second

thread continuously performs the graph optimization steps

outlined in Algorithm 2. The third thread samples new fea-

tures every N frames (N = 6 in our experiments). For all

experiments we set the edge weight eα = 1/‖v^i_u − v^j_u‖_2 (the reciprocal of the edge length in pixels) and eβ = 1.

We set the parameter λ, which controls the balance between the unsmoothed data and the regularizer, to a value in [0.1, 0.35].

4.1. Benchmark Datasets

We quantitatively compare the FLaME reconstructions

to existing approaches and show how we are able to pro-

duce accurate, dense geometry extremely efficiently at the

current frame. We interpolate the FLaME meshes to fully

dense inverse depthmaps and measure their accuracy and

completeness against two existing CPU-based approaches:

LSD-SLAM [6] and Multi-Level Mapping (MLM) [11].

We use image and pose sequences from the TUM RGB-

D SLAM Benchmark dataset (VGA at 30 Hz) [27] and the

EuRoC MAV datasets (WVGA at 20 Hz) [3]. Pose ground

truth for both datasets was generated using a motion capture

system. Structure ground truth was approximated using an

RGB-D sensor for the TUM sequences and a 3D laser scan-

ner for the EuRoC sequences.

The pose tracking and SLAM backend modules of LSD-

SLAM and MLM were disabled in the experiments so that

all three algorithms used the motion capture poses and all

performance differences can be attributed to the different

depth estimation techniques.

All metrics were captured on a desktop computer with an

Intel Core i7-4820K 3.7 GHz CPU. We use three sequences

from the TUM dataset (long_office_household, structure_texture_far, and nostructure_texture_near_withloop) and the six sequences from the EuRoC dataset with structure ground truth (V1_01, ..., V2_03).

We report two main measures for depthmap accuracy

and completeness: the relative inverse depth error and the

density of accurate depth estimates. The relative inverse

depth error is the error in inverse depth relative to ground

truth, averaged over all pixels and depthmaps. The density

of accurate depth estimates is the fraction of inverse depth

estimates that are within 10 percent of ground truth for each

Performance on Benchmark Datasets

Dataset  Method  DM     RE [%]  AD [%]  Cores  Time [ms]  FPS [Hz]
TUM      LSD     181    18      19      2.5    16         62
TUM      MLM     103    12      32      2.1    17         57
TUM      L=3     4950   8.5     54      2.0    16         61
TUM      L=4     4950   6.8     54      1.7    7.3        136
TUM      L=5     4950   6.6     51      1.4    4.2        236
EuRoC    LSD     874    18      17      1.6    16         61
EuRoC    MLM     734    17      25      1.0    14         69
EuRoC    L=3     12595  12      36      1.3    13         78
EuRoC    L=4     12595  11      37      1.2    7.0        143
EuRoC    L=5     12595  10      33      0.8    4.3        230

Table 1: We evaluate FLaME with various settings of detail pa-

rameter L (see Section 3.1) on two benchmark datasets [27, 3].

FLaME produces depthmaps (DM) with both lower relative in-

verse depth error (RE) and a higher density of accurate inverse

depths (AD), while taking less processing time per frame (Time)

and using less CPU load (Cores), than state-of-the-art approaches

LSD-SLAM [6] and MLM [11]. Refer to Section 4.1 for a more

detailed description of the metrics and experimental setup.

Accurate Inverse Depth Density [%]

Dataset  Sequence   LSD   MLM   L=3   L=4   L=5
TUM      fr3_loh    18.9  30.4  48.1  47.9  44.1
TUM      fr3_nstn   16.5  30.6  52.5  53.7  51.9
TUM      fr3_stf    26.8  47.8  72.6  72.0  69.1
EuRoC    V1_01      18.2  26.4  36.6  37.8  34.4
EuRoC    V1_02      18.3  27.9  39.2  40.5  37.8
EuRoC    V1_03      11.0  17.0  26.4  27.0  24.7
EuRoC    V2_01      25.9  39.3  48.1  46.9  41.6
EuRoC    V2_02      20.5  28.9  37.2  37.2  32.2
EuRoC    V2_03      11.5  19.0  27.8  28.9  26.7

Table 2: Here we present the fraction of inverse depths per

depthmap that are within 10 percent of groundtruth for each bench-

mark video sequence [27, 3] for LSD-SLAM [6], MLM [11], and

FLaME with different settings of parameter L. FLaME outper-

forms the competing approaches, with L = 4 providing a nice bal-

ance between the number of vertices per depthmap and the amount

of smoothing performed.

depthmap, averaged over all depthmaps. We also report

both runtime per frame and CPU load over the datasets.
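A minimal sketch (our assumed implementation; the exact handling of invalid pixels in the paper may differ) of both per-depthmap metrics:

```python
import numpy as np

def depthmap_metrics(xi_est, xi_gt):
    """xi_est, xi_gt: dense inverse depthmaps; NaN marks missing values."""
    gt_valid = np.isfinite(xi_gt) & (xi_gt > 0)
    both = gt_valid & np.isfinite(xi_est)
    rel = np.abs(xi_est[both] - xi_gt[both]) / xi_gt[both]
    relative_error = rel.mean()                             # RE, per depthmap
    accurate_density = (rel < 0.10).sum() / gt_valid.sum()  # AD, within 10 percent
    return relative_error, accurate_density
```

Both numbers are then averaged over all depthmaps in a sequence.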

The results are summarized in Tables 1 and 2; as

can be seen, FLaME produces dense geometry more accu-

rately and efficiently than the competing approaches. On

the EuRoC sequences FLaME with L = 5 produces re-

constructions at up to 230 Hz using less than one Intel i7

CPU core and achieves the lowest relative inverse depth er-

ror across the different systems. Although finer settings of

L = 3 and L = 4 produce slightly better density metrics, as

expected, they fare slightly worse in terms of relative inverse

depth error compared to L = 5. We believe the primary rea-

son for this unintuitive result is that the graph optimization

takes longer to converge for these parameter settings given

the greater number of vertices and edges. Since the mesh


Figure 6: Flight Experiments: FLaME can be used to enable online perception for autonomous navigation. We conducted indoor and

outdoor flight experiments running FLaME onboard a small micro-aerial vehicle (MAV) (bottom left) with a forward-facing camera flying

at up to 3.5 meters-per-second. The image on the top left shows the collision-free trajectory (pink) that is generated to navigate around a

pillar obstacle. The images to the right show the inverse depth meshes as the vehicle approaches the obstacle field.

takes longer to converge, the camera moves before the mesh

can settle, resulting in higher inverse depth error. Initial

depthmap convergence is very fast, however, usually within

the first second of operation. In addition, FLaME produces

accurate meshes at the current frame, while the competing

approaches (which are both keyframe based) produce re-

constructions far more infrequently, which is particularly

dangerous for mobile robot navigation.

We also experimented with corrupting the ground truth

positions with additive Gaussian noise to characterize the

effect of pose error. With no artificial noise, the density of

accurate inverse depths for the TUM fr3_stf sequence is

71 percent. However, with translation noise with a standard

deviation of 1 cm, this density drops to 30 percent, which

demonstrates the importance of accurate pose information

on the depth estimation process. Addressing this limitation

is one direction that we would like to pursue for future work.

4.2. Flight Experiments

We also provide results from experiments with FLaME

running completely onboard, in-the-loop on a small au-

tonomous quadrotor flying at up to 3.5 meters-per-second.

Timing and Load on Autonomous MAV

Metric Indoor Outdoor

Vehicle Speed [m/s] 2.5 3.5

Depthmaps 803 1046

CPU Load [cores] 1.6 1.7

Runtime [ms] 9.4 11

Peak FPS [Hz] 106 91

Table 3: FLaME is efficient enough to allow for real-time per-

ception onboard small, computationally constrained micro-aerial

vehicles (MAVs). We flew our quadrotor fully autonomously in

both indoor and outdoor environments with no prior information

and used geometry from FLaME to plan around obstacles online.

The quadrotor (see Figure 6) weighed 3 kg and was

equipped with a Point Grey Flea 3 camera running at

320 × 256 image resolution at 60 Hz, an inertial measure-

ment unit (IMU), and a laser altimeter. The pose of the

camera was provided by an external visual-inertial odome-

try pipeline [26]. Collision-free motion plans were gener-

ated using A* on a 2D occupancy grid updated using slices

from the FLaME meshes. All computation was performed

onboard an Intel Skull Canyon NUC flight computer, with

no prior information provided to the robot.

We flew the vehicle through an indoor warehouse envi-

ronment and an outdoor forest with obstacles that the vehi-

cle had to plan around using perception from FLaME. Run-

time and load metrics on the flight computer are summa-

rized in Table 3. Even on the flight computer, FLaME was

still able to produce dense reconstructions at over 90 Hz (the

detail parameter was set to L = 3), with sufficient accuracy

to plan around obstacles.

5. Conclusion

We presented a novel dense monocular depth estimation

algorithm capable of reconstructing geometric meshes on

computationally constrained platforms. FLaME reformu-

lates the reconstruction problem as a variational smoothing

problem over a time-varying Delaunay graph, which allows

both for efficient, incremental smoothing of noisy depth es-

timates and low-latency mesh estimation.

Acknowledgments

This material is based upon work supported by DARPA

under Contract No. HR0011-15-C-0110 and by the NSF

Graduate Research Fellowship under Grant No. 1122374.

We also thank John Ware, John Carter, and the rest of the

MIT-Draper FLA Team for continued development and test-

ing support.


References

[1] H. Alvarez, L. M. Paz, J. Sturm, and D. Cremers. Collision

avoidance for quadrotors with a monocular camera. In Ex-

perimental Robotics, pages 195–209. Springer, 2016.

[2] K. Bredies, K. Kunisch, and T. Pock. Total generalized vari-

ation. SIAM Journal on Imaging Sciences, 2010.

[3] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder,

S. Omari, M. W. Achtelik, and R. Siegwart. The EuRoC

micro aerial vehicle datasets. The International Journal of Robotics Research, 2016.

[4] A. Chambolle and T. Pock. A first-order primal-dual al-

gorithm for convex problems with applications to imaging.

Journal of Mathematical Imaging and Vision, 2011.

[5] S. Daftry, S. Zeng, A. Khan, D. Dey, N. Melik-Barkhudarov,

J. A. Bagnell, and M. Hebert. Robust monocular

flight in cluttered outdoor environments. arXiv preprint

arXiv:1604.04779, 2016.

[6] J. Engel, T. Schops, and D. Cremers. LSD-SLAM: Large-

scale direct monocular SLAM. In Proc. ECCV, 2014.

[7] J. Engel, J. Sturm, and D. Cremers. Camera-based navigation

of a low-cost quadrocopter. In Proc. IROS, 2012.

[8] J. Engel, J. Sturm, and D. Cremers. Semi-dense visual odom-

etry for a monocular camera. In Proc. ICCV, 2013.

[9] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza.

IMU preintegration on manifold for efficient visual-inertial

maximum-a-posteriori estimation. In RSS, 2015.

[10] C. Forster, M. Pizzoli, and D. Scaramuzza. SVO: Fast semi-

direct monocular visual odometry. In Proc. ICRA, 2014.

[11] W. N. Greene, K. Ok, P. Lommel, and N. Roy. Multi-Level

Mapping: Real-time Dense Monocular SLAM. In Proc.

ICRA, 2016.

[12] G. Klein and D. Murray. Parallel tracking and mapping for

small AR workspaces. In Proc. ISMAR, 2007.

[13] A. I. Mourikis and S. I. Roumeliotis. A multi-state constraint

Kalman filter for vision-aided inertial navigation. In Proc.

ICRA, 2007.

[14] R. Mur-Artal, J. Montiel, and J. D. Tardos. ORB-SLAM: A
versatile and accurate monocular SLAM system. Trans. on

Robotics, 2015.

[15] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison.

DTAM: Dense tracking and mapping in real-time. In Proc.

ICCV, 2011.

[16] P. Ondruska, P. Kohli, and S. Izadi. MobileFusion: Real-

time volumetric surface reconstruction and dense tracking on

mobile phones. In Trans. on Visualization and Computer

Graphics, 2015.

[17] N. Parikh, S. P. Boyd, et al. Proximal algorithms. Founda-

tions and Trends in Optimization, 2014.

[18] S. Pillai, S. Ramalingam, and J. J. Leonard. High-

Performance and Tunable Stereo Reconstruction. In Proc.

ICRA, 2016.

[19] P. Pinies, L. M. Paz, and P. Newman. Dense mono recon-

struction: Living with the pain of the plain plane. In Proc.

ICRA, 2015.

[20] R. Ranftl, K. Bredies, and T. Pock. Non-local total gener-

alized variation for optical flow estimation. In Proc. ECCV.

Springer International Publishing, 2014.

[21] T. Schops, J. Engel, and D. Cremers. Semi-dense visual

odometry for AR on a smartphone. In Proc. ISMAR. IEEE,

2014.

[22] T. Schops, T. Sattler, C. Hane, and M. Pollefeys. 3D modeling
on the go: Interactive 3D reconstruction of large-scale scenes

on mobile devices. In 3D Vision (3DV), 2015.

[23] J. R. Shewchuk. Triangle: Engineering a 2D Quality Mesh

Generator and Delaunay Triangulator. In Applied Computa-

tional Geometry: Towards Geometric Engineering, 1996.

[24] J. R. Shewchuk. Delaunay refinement algorithms for trian-

gular mesh generation. Computational Geometry, 2002.

[25] J. G. Siek, L.-Q. Lee, and A. Lumsdaine. The Boost Graph

Library: User Guide and Reference Manual, Portable Doc-

uments. Pearson Education, 2001.

[26] T. J. Steiner, R. D. Truax, and K. Frey. A vision-aided inertial

navigation system for agile high-speed flight in unmapped

environments. In Proc. IEEE Aerospace Conference, 2017.

[27] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cre-

mers. A benchmark for the evaluation of RGB-D SLAM sys-

tems. In Proc. IROS, 2012.

[28] A. Wendel, M. Maurer, G. Graber, T. Pock, and H. Bischof.

Dense reconstruction on-the-fly. In Proc. CVPR, 2012.
