Out-of-Core Compression for Gigantic Polygon Meshes
Martin Isenburg, University of North Carolina at Chapel Hill
Stefan Gumhold, WSI/GRIS, Universität Tübingen
Gigantic Meshes
• 3D scans
• CAD models
• isosurfaces
St. Matthew Statue
• over 6 Gigabyte: 186 million vertices, 372 million triangles
• cut into 12 pieces
• slows down:
  – transmission
  – storage
  – loading
  – processing
Mesh Compression
• active research area since ‘95
  “Geometry Compression” [Deering ‘95]
• many efficient schemes, but ...
  – mesh has to fit into memory
  – 2/4 GB limit on common PCs
• “Compressing Large Polygonal Models” [Ho et al. ‘01]
  – cut mesh into pieces, compress separately, stitch back together
Mesh De-Compression
• compression may use a 64-bit super computer
• but decompression runs on a desktop PC
  – single pass
  – small memory foot-print
• this requirement eliminates a number of schemes
  “Topological Surgery” [Taubin & Rossignac ‘98]
  “Edgebreaker” [Rossignac ‘99]
  “Face Fixer” [Isenburg & Snoeyink ‘00]
Contributions
• Compressor
  – minimal access
• Out-of-Core Mesh
  – transparent
  – caching clusters
• Compact Format
  – small footprint
  – streaming
Overview
• Related Work
• Out-of-Core Mesh
• Compression Scheme
• Processing Sequences
• Conclusion
• Current Work
Related Work
• Large Mesh Processing
  – Simplification
  – Visualization (Multi-Resolution)
  – Compression
• Main Out-of-Core Techniques
  1. Mesh Cutting
  2. Batch Processing
  3. Online Processing
1. Mesh Cutting
• cut mesh into smaller pieces
• process each piece separately
• give special treatment to cuts
• stitch result back together
  Simplification [Hoppe ‘97]
  Visualization [Bernardini et al. ‘99]
  Compression [Ho et al. ‘01]

Ho et al.
“Compressing Large Polygonal Models” [Ho et al. ‘01]
• cut / compress pieces / stitch
• artificial discontinuities
  – special treatment for cuts
  – compression rates 25 % lower
• multi-pass decompression
  – decompress each piece separately, then stitch
  – decompression 100 times slower
2. Batch Processing
• work on “polygon soup”
• process mesh in increments of single triangles
• make no use of connectivity
  Simplification [Lindstrom ‘00]
  Visualization [Lindstrom ‘03] + external sorting

3. Online Processing
• reorganize mesh data into an external memory structure
• allow “random” mesh access
• full connectivity available
  Simplification [Cignoni et al. ‘03]
  Compression [this paper]
Cignoni et al.
“Octree-based External Memory Mesh” [Cignoni et al. ‘03]
• motivation: enable use of high-quality simplification
  – use edge-collapse
  – simplify in one piece
  – just like “in-core”
• clusters do not store explicit connectivity
(figure courtesy of Paolo Cignoni)
Out-of-Core Mesh
• half-edge based

struct HalfEdge {
  Index origin;
  Index inv;
  Index next;
};

Out-of-Core Mesh (1)
(figure: a half-edge with its origin vertex and its inv and next pointers)
• spatial clustering
  – stored on disk
  – cached clusters

Out-of-Core Mesh (2)
• cluster index + local index
  – hide clustering
  – try to fit in 32 bits

struct IndexPair {
  int ci : 15;
  int li : 17;
};

• functionality
  – read only, except for an "isEncoded" flag per edge

mesh.getNext(edge);
mesh.getInv(edge);
mesh.getOrigin(edge);
mesh.isBorder(edge);
mesh.isEncoded(edge);
mesh.setIsEncoded(edge);
mesh.getPosition(vertex);
mesh.isManifold(vertex);
Construction
• geometry passes
  – compute bounding box
  – create clustering
  – sort vertices into clusters
• connectivity passes
  – sort edges into clusters
  – match inverse half-edges
  – link border edges
  – mark non-manifold vertices
Clustering Input
• compute bounding box
• create clustering
  – counter grid
  – nearest neighbors
  – graph partitioning
• sort vertices into clusters
(figure: counter grid over the bounding box with per-cell vertex counts)
Computing Connectivity
• sort edges into clusters
• match inverse half-edges
  – border edges
  – non-manifold edges are cut
• link borders
• mark non-manifold vertices
(figure: linked next pointers and matched inv pointers along a border)
Example Timings
• on-disk size: 871 MB / 1.7 GB / 11.2 GB
• in-core limit: 96 MB / 128 MB / 192 MB / 384 MB
• build: 5 min / 19 min / 35 min / 7 hours
• compression: 14 min / 49 min / 4 hours
• cache misses: 1.3 / 2.1 / 2.1 / 11.0
Compression Scheme
Popular Schemes
“Topological Surgery” [Taubin & Rossignac ‘98]
“Triangle Mesh Compression” [Touma & Gotsman ‘98]
“Cut-Border Machine” [Gumhold & Strasser ‘98]
“Edgebreaker” [Rossignac ‘99]
“Face Fixer” [Isenburg & Snoeyink ‘00]
“Angle Analyzer” [Lee, Alliez & Desbrun ‘02]
“Dual Graph Approach” [Li & Kuo ‘98]
“Spectral Compression” [Karni & Gotsman ‘00]
Triangle Mesh Compression
Data Structure

struct BoundaryEdge {
  BoundaryEdge* prev;
  BoundaryEdge* next;
  Index edge;
  Index origin_idx;
  int origin[3];
  int across[3];
  int slots;
  bool border;
};

(figure: a boundary edge with its prev/next links and its origin and across vertices, separating the already encoded region from the not yet encoded region)
Holes
Non-Manifold Vertices
Parallelogram Prediction
Quantization
• precision of 32-bit float?
• largest exponent least precise
(figure: float samples along the x-axis from -4.095 to 190.974; each power-of-two range 4, 8, 16, 32, 64, 128 carries the same 23 bits of precision)
Samples per Millimeter
extent    16 bit   18 bit   20 bit   22 bit   24 bit
20 cm     327      1311     -        -        -
2.7 m     -        97       388      1553     -
195 m     -        -        5        22       86
Results
original     508 MB    1 GB      6.7 GB
compressed   37 MB     61 MB     344 MB
load-time    15 sec    27 sec    174 sec
foot-print   1.5 MB    2.8 MB    9.4 MB
Results
             power plant   double eagle
original     285 MB        1.8 GB
compressed   28 MB         180 MB
load-time    9 sec         63 sec
foot-print   1 MB          1 MB
Processing Sequences
Mesh Formats
• indexed meshes
  – bad for large data sets (2 GB / 4 GB limits)
• polygon soup
  – efficient processing
  – no connectivity
• external memory mesh
  – seamless connectivity
  – slow to build & use
Processing Sequences
• seamless connectivity during a fixed traversal
Large Mesh Simplification
“Large Mesh Simplification using Processing Sequences”
[Isenburg, Lindstrom, Gumhold, Snoeyink; Visualization ‘03]
(figure: input and output boundaries separating the processed, buffered, and unprocessed regions; original method vs. using processing sequences)
Conclusion
Summary
• Compressor
  – minimal access
• Compact Format
  – small footprint
  – streaming
• Out-of-Core Mesh
  – transparent
  – caching clusters
Issues
• length of compression boundary
  – O(√n) possible [Bar-Yehuda & Gotsman ‘96]
  – but we use only local heuristics
  – may require too many clusters
  – causes thrashing
• external memory mesh
  – expensive to build & use
  – avoid using it?
Current Work
Small Footprint Streaming
• vertices & triangles interleaved
  – streaming
  – small memory footprint
• concept: “Streaming Meshes”
• knowledge when a vertex is no longer needed
Streaming Meshes
• interleave vertices & triangles
• “finalize” vertices
  (finalized vertices are not used by subsequent triangles)

v 1.32 0.12 0.23
v 1.43 0.23 0.92
v 0.91 0.15 0.62
f 1 2 3
done 2
v 0.72 0.34 0.35
f 4 1 3
done 1
⋮
Streaming Compression
(diagram: an application converts an indexed mesh into the out-of-core mesh, which is compressed into a compressed streaming mesh; a streaming mesh can also be compressed directly; some kind of processing consumes and emits compressed streaming meshes)
Thank You.
gigantic thanks to
• Stanford’s Digital Michelangelo project
• Walkthru group at UNC
• Newport News Shipbuilding
• LLNL