TRIANGLE MESHES
•Part 1: Constrained Triangulations
•Part 2: Delaunay Refinement
•Part 3: Local Feature Size
BY ILYA BITLER
• What is a constrained triangulation? Similar to the Delaunay triangulation problem we have seen, except that now we have a constraining set of line segments that cannot be changed, and we want a triangulation that respects them. Formally:
Given a set of points S and a finite set of line segments L (each segment in L connects two points of S), where any two segments are either disjoint or share a common endpoint, we want to find a constrained triangulation: a triangulation of S that contains every segment of L as an edge.
• How can we find one? A plane-sweep algorithm, once again.
• A vertical line sweeps the plane from left to right.
• Two data structures: X, the schedule (events ordered in time), and Y, which stores the segments of L that currently intersect the sweep line.
• Invariant: the partial triangulation contains the edges of L to the left of the line, the maximal number of edges connecting points to the left of the line, and no other edges.
• So how does it really work?
• Note that between two consecutive constraining segments intersecting the sweep line, the boundary of the partial triangulation forms a convex chain.
• How are edges added?
• When a point p is encountered by the sweep line, it is connected to the rightmost vertex of the corresponding convex chain.
• We then walk along that chain in the clockwise and anti-clockwise directions and add the remaining edges (note that sometimes an edge to some vertex cannot be added without destroying the triangulation).
• This description is a bit abstract, so we will look at the 3 main scenarios of the sweep line encountering a point.
• 3 “scenarios”:
Point p is the left endpoint of a constraining segment; its addition will later lead to the creation of two chains (when the sweep line reaches t1 and t2).
Point p is the right endpoint of a constraining segment; its addition will later lead to the merger of two chains (when the sweep line reaches another vertex, like t in the figure).
Point p lies in an interval along the sweep line, so edges are added only in the chain corresponding to that interval.
• Complexity: constructing the schedule X amounts to sorting the points of S, done in O(n log n). The cross-section Y is maintained as a dictionary, i.e. it supports search, insertion and deletion in O(log n). We search once for each point of S and insert and delete each segment of L; since all these operations cost O(log n) and fewer than 3n edges are added to the triangulation, the total running time is O(n log n).
• Note that the triangulations constructed by the aforementioned algorithm are not Delaunay triangulations (triangles may contain very small or very large angles). Thus it is not the best triangulation possible, and we move on to constrained Delaunay triangulations.
Constrained Delaunay triangulations
• The idea is to eventually show that we can construct a Constrained Delaunay triangulation (CDT) using the edge-flipping algorithm, with a few limitations (will be mentioned later on). Before that we will see a few definitions and prove a few claims:
• Let us define a relation of visibility between points: we say that points x, y ∈ R2 are visible from each other if xy ∩ uv ⊆ {u, v} for all uv ∈ L, i.e. the segment xy may meet a constraining segment only at that segment's endpoints.
• We define edge "belonging" to the CDT: an edge ab belongs to the CDT if (i) ab ∈ L, or (ii) a and b are visible from each other, and there is a circle through a and b such that every point of S inside the circle is invisible from every point x ∈ ab.
• We also define a witness of membership of ab in CDT as the circle described in (ii) above.
• Let us now define a more general notion of being locally Delaunay: let S and L be the points and constraining segments as before, and K any triangulation of S and L. We say that an edge ab ∈ K is locally Delaunay if:
1. ab ∈ L, or
2. it is a convex hull edge, or
3. for the two triangles abc, abd ∈ K sharing ab, d lies outside the circumcircle of abc.
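Condition 3 is the standard empty-circumcircle test and can be checked with the in-circle determinant. Below is a minimal Python sketch (the helper names `in_circle` and `locally_delaunay` are ours, not from the lecture); it assumes points are (x, y) pairs and that a, b, c are in counter-clockwise order:

```python
def in_circle(a, b, c, d):
    # Sign of the 3x3 in-circle determinant: positive exactly when d lies
    # strictly inside the circumcircle of the CCW-oriented triangle abc.
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
           - (bx * bx + by * by) * (ax * cy - ay * cx)
           + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0

def locally_delaunay(a, b, c, d, constrained):
    # Edge ab shared by triangles abc and abd: it is locally Delaunay
    # if it is a constraining segment (condition 1) or if d lies outside
    # the circumcircle of abc (condition 3).
    return constrained or not in_circle(a, b, c, d)
```

In practice exact or adaptive arithmetic is needed for this predicate; the floating-point version above is only for illustration.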
This definition leads us to our main claim: Constrained Delaunay Lemma.
• Constrained Delaunay Lemma: for a given triangulation K of S and L, if every edge of K is locally Delaunay then K is the CDT of S and L.
• Proof: we show that every edge of K satisfies the conditions of "belonging" to the CDT as defined previously. Let ab ∈ K and let p be any vertex of K.
• If ab ∈ L, it belongs to the CDT by definition.
• If ab is a convex hull edge, we can find a witness, i.e. a circle passing through a and b such that every p lies outside it, and thus ab belongs to the CDT.
• The third case is that ab belongs to two triangles. Let abc be the triangle separated from p by the line passing through ab. We will prove that if p is visible from any point x ∈ ab, then it lies outside the circumcircle of abc, and thus that circle is a witness:
• Proof cont.: consider the edges of K crossing xp; since x and p are visible from each other, none of these edges is in L (by the definition of visibility). Now we can conclude that p lies outside the circumcircle of abc using the original proof of the Delaunay Lemma (with the power-of-a-point method, etc.).
• Conclusion: we can use the edge-flipping algorithm to build the CDT. Edges in L are never flipped, since they are locally Delaunay by definition. The complexity is O(n2), as before.
• The MaxMin Angle Lemma holds for CDTs as well.
• Constrained MaxMin Angle Lemma: the CDT maximizes the minimum angle among all constrained triangulations of S and L.
• Naturally a discussion of Delaunay triangulation graph, would lead us to the dual Voronoi diagram. However, as we’ll see it’s a bit more complex in the constrained case.
Extended Voronoi diagrams
• First of all, we shall look at the surface on which the diagram(s) will be built.
• Define Σ0, a sheet in R2 containing S and L; we call it the primary sheet.
• For each li ∈ L, we cut Σ0 along li and glue in another sheet Σi (also cut open along li); these sheets are called secondary sheets. The gluing is done so that by crossing li one switches from Σ0 to Σi (and vice versa).
• If m = |L|, then we have m secondary sheets, all attached to Σ0 but none intersecting each other; each point of R2 has a copy on each sheet.
• Note that this construction cannot be embedded in R3, only in R4 (short explanation on the board).
• Before we define the Voronoi diagram itself, let us look at a new, generalized, distance definition.
• First we generalize the visibility relation: for two points of Σ0 the definition is the same as before. For a pair of points (x0, yi) with i ≠ 0 and yi ∈ Σi it is defined as follows:
• x0 and yi are visible from each other if xy crosses li, and li is the first constraining segment crossed when we traverse xy from x to y.
• We now define the distance: d(x, y) = ||x - y|| if x and y are visible from each other, and d(x, y) = ∞ otherwise.
• Finally, define the extended Voronoi diagram by the new distance function, example (explained):
• A circle that witnesses the membership of ab in CDT has its centre on the primary or on a secondary sheet. Note that the centre is closer to a and b than to any other point in S (by the distance defined above).
• Thus the Voronoi regions of a and b meet along a non-empty common portion of their boundary.• So it follows that every point on an edge of the extended Voronoi diagram is the centre of a witness circle of a corresponding edge in CDT.
Delaunay Refinement
• In this section we will talk about constructing triangle meshes in the plane; in real-world applications this generalizes to 3D meshing of objects.
The problem of meshing will be demonstrated in 2D:
• The goal is to decompose a polygonal region in R2 into elements (triangles in our case); the region may contain holes, isolated vertices and constraining edges.
• How is it done and what is the output? Let G be our input graph; we enclose the input region in a bounding box and triangulate everything inside it.
• So we get a triangulation of that box, in particular our region of interest is also triangulated. • Note that the output has to cover all the input edges and vertices, but it may also have new vertices (we will see very soon why).
• Of course, this is very general and we will now demonstrate the algorithm itself, but before that let us define a notion of Triangle Quality.
• How can we measure the quality of a triangle? First of all, we have seen that when triangulating we want triangles whose angles are not too small. Thus the quality may be measured by:
• The smallest angle, θ.
• The largest angle.
• The aspect ratio (to be defined in a moment).
• Does a good smallest-angle bound imply a good largest-angle bound, or vice versa? If the smallest angle is θ, the largest is at most π - 2θ, while we cannot say much in the opposite direction (given only a bound on the largest angle).
• Aspect ratio: if ac is the longest edge in abc, then the aspect ratio is the length of ac divided by the distance of b from ac.
• Aspect Ratio vs. smallest angle
• Consider the triangle above, with ac the longest edge, θ the angle at a (the smallest angle), and x the foot of the perpendicular from b to ac. We get ||b-x|| = ||b-a||·sin θ, and ab is at least as long as bc, so from the triangle inequality we get ||b-a|| ≥ ||c-a||/2. And:
1/sin θ ≤ ||c-a||/||b-x|| ≤ 2/sin θ.
Thus the bounds on the aspect ratio and on the smallest angle are tied to each other: a bound on θ away from zero gives an upper bound for the aspect ratio, and vice versa.
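These bounds are easy to check numerically. A small Python sketch (helper names are ours), assuming points as (x, y) pairs:

```python
import math

def smallest_angle(a, b, c):
    # Smallest interior angle of triangle abc, via vector angles.
    def ang(p, q, r):  # angle at vertex p
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))
    return min(ang(a, b, c), ang(b, a, c), ang(c, a, b))

def aspect_ratio(a, b, c):
    # Longest edge length divided by the distance of the opposite
    # vertex from the line supporting that edge.
    edges = [((a, b), c), ((b, c), a), ((c, a), b)]
    (p, q), r = max(edges, key=lambda e: math.dist(*e[0]))
    base = math.dist(p, q)
    # twice the signed area gives base * height
    area2 = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return base / (area2 / base)
```

For any non-degenerate triangle the computed ratio should land between 1/sin θ and 2/sin θ, matching the derivation above.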
Now that we have defined triangle quality we get back to Delaunay Refinement.
• Our goal is to construct K so that its smallest angle is no less than some constant, while the number of triangles exceeds the necessary minimum by at most a constant factor.
• Sometimes the input contains small angles that cannot be resolved (between two constraining segments); to minimize the "damage" from such angles we isolate them. For now we will assume that all angles between input segments are at least π/2.
• General idea: the triangulation K will include all the points of the input; however, while triangulating we may add new points to resolve two possible types of problems:
1. The input edges are not covered by K.
2. Certain triangles have angles that are too small.
We will now look at the ways of solving these problems by adding new points to the triangulation.
1. An uncovered edge: we look at an input edge ab that is not covered by the current Delaunay triangulation; notice that such a situation may occur only when the diameter circle of ab contains some vertices. The fix is to add the midpoint of ab to the triangulation (recursively, if needed). This operation is denoted SPLIT1. We say that p encroaches upon ab if it prevents ab from being covered. Note that such a situation may also arise from fixing an angle that is too small (next slide).
In that particular example, after adding x the point p no longer encroaches upon ax or xb, so we can cover ab by covering ax and xb.
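The encroachment test and SPLIT1 amount to a few lines. A Python sketch (function names ours), assuming points as (x, y) tuples:

```python
import math

def encroaches(p, a, b):
    # p encroaches upon edge ab if it lies strictly inside the
    # diameter circle of ab (centre = midpoint, radius = |ab| / 2).
    mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return math.dist(p, mid) < math.dist(a, b) / 2

def split1(a, b):
    # SPLIT1: an encroached edge ab is split at its midpoint.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
```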
2. The triangle's smallest angle is too small: we look at a triangle abc whose smallest angle is below the required bound. We fix this by adding x, the circumcentre of abc. This procedure is denoted SPLIT2.
After adding x we are guaranteed that abc will not survive in K, since its circumcircle now contains a point.
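SPLIT2 needs the circumcentre of abc; below is a standard closed-form computation from the perpendicular-bisector equations (helper name ours):

```python
def circumcentre(a, b, c):
    # Circumcentre of triangle abc via perpendicular bisectors.
    # d is proportional to the signed area of abc, so it vanishes
    # only for collinear points.
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0] ** 2 + a[1] ** 2) * (b[1] - c[1])
          + (b[0] ** 2 + b[1] ** 2) * (c[1] - a[1])
          + (c[0] ** 2 + c[1] ** 2) * (a[1] - b[1])) / d
    uy = ((a[0] ** 2 + a[1] ** 2) * (c[0] - b[0])
          + (b[0] ** 2 + b[1] ** 2) * (a[0] - c[0])
          + (c[0] ** 2 + c[1] ** 2) * (b[0] - a[0])) / d
    return (ux, uy)
```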
• The algorithm:
• First we put the input G in a rectangular box B. The reason is to bound the growth of the triangulation (in some circumstances it could otherwise grow outward due to the addition of new points).
• The size of B is three times the size of the enclosing rectangle of G, with G itself at the centre of B.
• We decompose each side of B into three edges, obtaining 12 additional vertices, and add them to the input; initially we triangulate this input (obtaining a CDT), then run the following algorithm:
• Note that the circumcentre x is always inside B.
• Preliminary analysis:
• The analysis of the algorithm is based mostly on the points it adds as new vertices; the first thing we can say for sure is that their number is finite, because of the way the input is enclosed by B.
• Suppose we have proven that any two vertices are always at least 2ε apart. This gives a bound on how many points the algorithm can add.
• Let w and h be the width and height of B, and let A be the area of B extended by ε on each side: A = (w + 2ε)(h + 2ε).
• Each vertex is the centre of a disc of radius ε and area πε², and these discs are disjoint and lie inside the extended box, so we get a bound on the number of vertices: n ≤ A/(πε²).
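A quick numeric instance of this packing bound (the box dimensions and ε below are made-up illustration values):

```python
import math

def max_vertices(w, h, eps):
    # Discs of radius eps around vertices are disjoint and fit into the
    # box extended by eps on each side, so n <= A / (pi * eps**2).
    area = (w + 2 * eps) * (h + 2 * eps)
    return area / (math.pi * eps ** 2)
```

As expected, the bound grows as ε shrinks: halving the minimum gap roughly quadruples the admissible vertex count.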
• We analyze the algorithm more thoroughly in the final part of the lecture, including a proof of the existence of such an ε.
• For now we finish this part with a quick time-complexity analysis.
• Complexity:
• The most expensive operation is edge-flipping.
• We can no longer use the bound proven for a regular Delaunay triangulation, since the points are not added in random order (so bad cases may arise).
• We do know that the total number of flips is less than the number of point pairs, n(n-1)/2, so an upper bound for the flips is O(n2).
• If we assume that the cost of adding a new vertex is not too high (at most O(n)), then we get an upper bound of O(n2) for the running time of the algorithm.
• We will now move on to a deeper, more detailed analysis of the algorithm.
Local Feature Size
• In this section we will focus on the analysis of the Delaunay Refinement algorithm. In particular we shall prove an upper bound on the number of triangles generated and a lower bound on the number of triangles that must be generated.
• We define the concept of local feature size:
• A function f: R2 → R, where f(x) is the smallest radius r such that the closed disk with centre x and radius r:
(i) contains two vertices of G, or
(ii) intersects an edge of G and contains one vertex of G that is not an endpoint of that edge, or
(iii) intersects two disjoint edges of G.
• Note that f(a) ≤ ||a-b|| for any two vertices a ≠ b of G.
• Examples for the definition of Local Feature Size
• Note that the farther of the two features from x always lies on the circle itself (by minimality of r).
• We claim that f(x) satisfies the Lipschitz condition: f(x) ≤ f(y) + ||x-y|| for all x, y ∈ R2.
• Proof: assume for contradiction that two points satisfy f(x) < f(y) - ||x-y||. Then the disk of x is contained in the interior of the disk of y (explanation on the board). Thus we can shrink the disk of y while maintaining its non-empty intersection with two disjoint vertices or edges of G (basically shrink it until it just contains the disk of x), a contradiction to the minimality in the definition of f(y).
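When L is empty, only case (i) applies and f(x) reduces to the distance from x to the second-nearest vertex, which makes the Lipschitz condition easy to spot-check by brute force (this simplification and the helper name are ours):

```python
import math

def lfs_points_only(x, vertices):
    # With no constraining edges, case (i) alone applies: f(x) is the
    # distance from x to the second-nearest vertex of G.
    return sorted(math.dist(x, v) for v in vertices)[1]

# a small made-up vertex set of G
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (3.0, 3.0)]
```

The distance to the k-th nearest point of a fixed set is always 1-Lipschitz, so every sampled pair should satisfy f(x) ≤ f(y) + ||x-y||.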
• Throughout the analysis we will use two positive constants C1, C2 that satisfy the following inequalities:
C1 ≥ 1 + √2·C2 and C2 ≥ 1 + 2·sin α·C1.
• α is the lower bound on the angles.
• We can look at the two boundary lines, writing C2 as a function of C1:
Left: C2 = (1/√2)·C1 - 1/√2. Right: C2 = 2·sin α·C1 + 1.
• The two constants exist if the slope of the left line is greater than the slope of the right one, that is 1/√2 > 2·sin α. Example:
Note that the intersection of the lines gives us the smallest constants that satisfy the inequalities.
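The intersection point can be computed directly. A sketch, assuming the two boundary lines C2 = (C1 - 1)/√2 and C2 = 1 + 2·sin α·C1 (our reading of the inequalities; the function name is ours):

```python
import math

def smallest_constants(alpha):
    # Intersect C2 = (C1 - 1)/sqrt(2) with C2 = 1 + 2*sin(alpha)*C1.
    # Solvable exactly when the left slope 1/sqrt(2) exceeds the
    # right slope 2*sin(alpha).
    s = 2 * math.sin(alpha)
    assert s < 1 / math.sqrt(2), "no valid constants for this alpha"
    c1 = (1 + math.sqrt(2)) / (1 - math.sqrt(2) * s)
    c2 = (c1 - 1) / math.sqrt(2)
    return c1, c2
```

Note that the slope condition 2·sin α < 1/√2 fails around α ≈ 20.7°, which matches the familiar angle limit of Delaunay refinement.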
• We assume that the algorithm starts with the vertices of G and generates the others in some sequence; we will show that when a vertex is added, its distance to the already-present vertices is not much smaller than its local feature size.
• Let p and x be two vertices such that x was added after p; we look at two cases:
(A) x was added by SPLIT1: then ||x-p|| ≥ f(x)/C1.
(B) x was added by SPLIT2: then ||x-p|| ≥ f(x)/C2.
We prove B and then A. Proof of B: in this case x is the circumcentre of a skinny triangle abc. Let θ < α, the angle at c, be the smallest angle of abc. Assume that a, b are vertices of G or that a was added after b. Let L be the length of ab. We consider three cases for how a became a vertex:
• Case 1: a is in G; then b is also in G, and thus by definition f(a) ≤ L.
• Case 2: a was added as the circumcentre of a circle with radius r′. That circle had to be empty before the addition of a, so r′ ≤ L. By induction (here and in the rest of the proof we implicitly assume that A and B hold for all vertices added before x), f(a) ≤ r′·C2 ≤ L·C2.
• Case 3: a was added as the midpoint of a segment; by induction f(a) ≤ L·C1.
Since 1 ≤ C2 ≤ C1, we have f(a) ≤ L·C1 in all three cases. Let r be the radius of abc's circumcircle; then L = 2r·sin θ, and using the Lipschitz condition:
f(x) ≤ f(a) + ||a-x|| ≤ C1·L + r = (2·C1·sin θ + 1)·r ≤ (2·C1·sin α + 1)·r ≤ C2·r ≤ C2·||x-p||,
where the last step uses the emptiness of abc's circumcircle: every earlier vertex p satisfies ||x-p|| ≥ r,
so we have proven B.
• Proof of A: x is the midpoint of segment ab, p encroaches upon ab, and r = ||x-a|| = ||x-b|| is the radius of the diameter circle, the smallest circle passing through a and b. We consider two possible cases:
• Case 1: p lies on an input edge that shares no endpoint with ab; then f(x) ≤ ||x-p|| ≤ C1·||x-p|| by condition (iii) of the LFS definition (the disk around x of radius ||x-p|| intersects both that edge and ab).
• Case 2: p itself was a proposed circumcentre whose insertion triggered the splitting of ab. Let r′ be the radius of that circumcircle; since p encroaches upon ab it lies in the diameter circle, and we get r′ ≤ √2·r. Using the Lipschitz condition and induction:
f(x) ≤ f(p) + ||p-x|| ≤ C2·r′ + r ≤ (√2·C2 + 1)·r ≤ C1·r ≤ C1·||x-q|| for every existing vertex q (no vertex lies inside the diameter circle, else ab would have been split earlier),
so we have proven A as well.
• The invariants (A) and (B) guarantee that added vertices cannot get arbitrarily close to preceding vertices. This also implies that they cannot get too close to succeeding vertices.
• Smallest Gap Lemma: ||a-b|| ≥ f(a)/(1 + C1) for all vertices a ≠ b in K.
• Proof: if b precedes a, then by the invariants ||a-b|| ≥ f(a)/C1 ≥ f(a)/(1 + C1). If a precedes b, then ||b-a|| ≥ f(b)/C1, and thus f(a) ≤ f(b) + ||a-b|| ≤ C1·||a-b|| + ||a-b|| = (1 + C1)·||a-b||, as needed.
• This leads us to the conclusion that since vertices cannot get arbitrarily close to each other, we can use an area argument to show that the algorithm halts after adding a finite number of vertices.• We relate the number of vertices to the integral of 1/f2(x).
Let B be the bounding box previously defined and K the triangulation:
• Upper Bound Lemma: the number of vertices in K is at most some constant times ∫B dx/f²(x).
• Proof: for each vertex a in K, let Da be the disk around a with radius ra = f(a)/(2 + 2·C1).
• By the Smallest Gap Lemma the disks are pairwise disjoint.
• By the Lipschitz condition, f(x) ≤ f(a) + ra = (3 + 2·C1)·ra for every x ∈ Da, and the area of Da is π·ra².
• At least one quarter of each disk lies inside B (see one of the figures in the previous part). Thus we get:
∫B dx/f²(x) ≥ Σa (π·ra²/4) / ((3 + 2·C1)·ra)² = n·π / (4·(3 + 2·C1)²),
and indeed the integral is at least a constant times the number of vertices n.
• Before we move on to proving the Lower Bound, we shall use two geometric results on triangles with angles no less than some α>0:
• Two edges of such a triangle abc cannot differ too much in length; by the law of sines their length ratio is at most σ = 1/sin α. Also, if we have a chain of t triangles connected through shared edges, the length ratio between the first and last edge cannot exceed σ^t.
• Two edges sharing a common vertex are connected by a chain of triangles around that vertex. That chain cannot be longer than 2π/α (because that is how many angles of size at least α fit around a vertex).
• From these two facts we get:
• Length Ratio Lemma: the length ratio between two edges sharing a common vertex is at most σ^(2π/α).
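Taking σ = 1/sin α (our reading of the per-triangle bound, via the law of sines), the lemma's constant is straightforward to evaluate:

```python
import math

def edge_ratio_bound(alpha):
    # sigma: maximal ratio of two edge lengths within one triangle
    # whose angles are all at least alpha (law of sines).
    return 1 / math.sin(alpha)

def shared_vertex_ratio_bound(alpha):
    # A chain of at most 2*pi/alpha triangles around a vertex compounds
    # the per-triangle ratio: sigma ** (2*pi/alpha).
    return edge_ratio_bound(alpha) ** (2 * math.pi / alpha)
```

The bound blows up quickly as α shrinks, which is expected: flatter triangles allow both longer chains and larger per-step distortion.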
• We will now move on to Triangle Covering (another step towards the Lower Bound Lemma).
• What is triangle covering?
• For a given triangle we want to cover it with disks, in the following way:
• For each vertex we take a disk with radius c0 times the length of the shortest edge.
• Around the circumcentre we take a disk with radius (1 - c2) times the circumradius.
• Note that we can keep c0 fixed while forcing c2 as close to 0 as we like (by decreasing the smallest angle). If the angles cannot be arbitrarily small, then c2 can be bounded away from zero. This is essentially the following lemma:
• Triangle Cover Lemma: for each constant c0 > 0 there is a constant c2 > 0 such that the four disks cover the triangle.
• Proof: let R be the circumradius and ab the shortest edge. We get ||a-b|| ≥ 2R·sin(α/2): by the law of sines ||a-b|| = 2R·sin(∠c), and since every angle lies between α and π - 2α, this sine is at least sin(α/2).
The disk around a covers all points at distance at most c0·||a-b|| from a. Assume without loss of generality that c0 < 1/2. We look at the distance between the circumcentre z and a point y on ab at distance c0·||a-b|| from a:
The quantity on the right is the distance from z to a point that lies in a's disk and on the circumcircle of abc; note that y has to be closer to z than that (or it wouldn't be a triangle).
• Proof cont.: from the previous result, any point of abc not covered by the disks of a, b and c lies within that distance of z; since that distance is strictly less than R, we can pick c2 so that (1 - c2)·R equals it, and this c2 is a positive constant.
• We finally move on to the proof of the lower bound.
• First we state (without proof) the following bounds on the local feature size of a point x:
• If x is in the disk of radius (1 - c2)·R around the circumcentre, then f(x) ≥ c2·R·cos(α/2).
• If x is in the vertex disk of a, let L be the shortest edge incident to a; then f(a) ≥ L·sin α, so by choosing c0 = sin(α)/2 we get f(a) ≥ 2·c0·L and therefore f(x) ≥ f(a) - ||a-x|| ≥ c0·L for every point x inside the disk of radius c0·L around a.
• We will use these bounds to show that an algorithm that constructs triangles with angles no smaller than some α, generates at least some constant times the integral of 1/f2(x) many vertices.
• Lower Bound Lemma: for a given triangle mesh K of G whose angles are all at least α, the number of vertices is at least some constant times ∫B dx/f²(x).
• Proof: around each vertex a of K, draw a disk with radius sin(α)/2 times the length of the shortest edge incident to a. Set c0 = sin(α)/(2·σ^(2π/α)); by the Length Ratio Lemma this radius is then at least c0 times the shortest edge of any triangle containing a. Using the Triangle Cover Lemma we also find c2.
• For each triangle abc we draw the disk with radius (1 - c2) times the circumradius around the circumcentre.
• From the Triangle Cover Lemma we know that each triangle is covered by its four disks, so the entire mesh is covered by disks.
• For each disk Di, let fi be the minimum of f(x) over the points x ∈ Di; by the bounds given previously this minimum is at least some constant fraction of the radius of Di: fi ≥ ri/C.
• Proof cont.:
• Since the disks cover the mesh we have:
∫B dx/f²(x) ≤ Σi area(Di)/fi² = Σi π·ri²/fi² ≤ (number of disks)·π·C².
• The number of disks is the number of vertices (denoted n) plus the number of triangles, and the number of triangles is less than twice the number of vertices, so:
∫B dx/f²(x) ≤ 3n·π·C², i.e. n ≥ (1/(3π·C²))·∫B dx/f²(x),
• and so we have proven the lemma.
• This gives us matching bounds on the number of vertices and triangles generated, as we wanted.