MQP: Mesh Algorithms
Authors: Jason Reposa and Tim Garthwaite
Advisor: Professor Matt Ward
Previous Work
Here are some brief synopses of work we have reviewed.
M. Gopi and D. Manocha
This article overviews the work of M. Gopi and D. Manocha, of the Department of Computer Science at the University of North Carolina at Chapel Hill, on model simplification. They first created a method of simplifying B-spline models, and then generalized their algorithm into a multi-pass method for simplifying any 3D model.
Gopi and Manocha's program runs in three passes. The first pass creates an intermediate C-Model representation from a polygonal mesh, triangular mesh, or tensor product patch model. This C-Model representation is a collection of triangular spline patches. The second pass merges edges of the patches and then performs further simplification using pattern matching of patch adjacencies against a lookup table of known simplifications that are safe to perform. The final pass tessellates the simplified C-Model back into the original representation. The approach appears to be very flexible and fairly safe, but not especially efficient in time or space.
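As a rough illustration, the overall control flow might look like the following sketch; the type and function names here are our own placeholders, not Gopi and Manocha's.

    // Hypothetical sketch of the three-pass structure described above.
    struct Mesh   { /* polygons, triangles, or tensor product patches */ };
    struct CModel { /* collection of triangular spline patches */ };

    CModel buildCModel(const Mesh& input);   // pass 1: convert to C-Model
    void   simplify(CModel& model);          // pass 2: merge edges, pattern-match patches
    Mesh   tessellate(const CModel& model);  // pass 3: back to the original representation

    Mesh simplifyModel(const Mesh& input)
    {
        CModel intermediate = buildCModel(input);
        simplify(intermediate);
        return tessellate(intermediate);
    }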
Gopi and Manocha's implementation was written in C++ using arrays. Theirs is a work in progress; the paper cites work that will influence the next iteration of development.
Peter Lindstrom and Greg Turk
This article describes the work done by Peter Lindstrom and Greg Turk at the Georgia Institute of Technology on polygonal model simplification. Their algorithm calculates error metrics for all neighboring pairs of vertices (edges) and collapses the edge with the lowest error into a single vertex. This vertex is placed in a position that keeps the total volume of the model constant and satisfies a number of other optimizations that reduce mean geometric error. What makes their work unique in the field is that their algorithm, while remaining very accurate compared to peer work, makes greedy decisions for each edge collapse and vertex placement, and keeps no information about what the model used to look like, making it extremely fast and memory efficient.
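To illustrate the greedy structure of such an algorithm, here is a minimal C++ sketch; the mesh representation and the evaluate function are placeholders of our own, since Lindstrom and Turk's actual metric solves for a volume-preserving placement and its associated error.

    #include <functional>
    #include <queue>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Edge { int v0, v1; };

    struct Candidate {
        double cost;      // error metric for collapsing this edge
        Edge   edge;
        Vec3   placement; // position chosen for the merged vertex
        bool operator>(const Candidate& other) const { return cost > other.cost; }
    };

    // Placeholder: the real algorithm solves for the placement that keeps the
    // model's volume constant, then evaluates the resulting geometric error.
    Candidate evaluate(const Edge& e, const std::vector<Vec3>& verts)
    {
        Vec3 mid{ (verts[e.v0].x + verts[e.v1].x) / 2,
                  (verts[e.v0].y + verts[e.v1].y) / 2,
                  (verts[e.v0].z + verts[e.v1].z) / 2 };
        return { /*cost=*/0.0, e, mid };
    }

    void decimate(std::vector<Vec3>& verts, std::vector<Edge>& edges, int targetEdges)
    {
        std::priority_queue<Candidate, std::vector<Candidate>, std::greater<>> queue;
        for (const Edge& e : edges) queue.push(evaluate(e, verts));

        int remaining = static_cast<int>(edges.size());
        while (remaining > targetEdges && !queue.empty()) {
            Candidate best = queue.top(); // greedy: cheapest collapse first
            queue.pop();
            verts[best.edge.v0] = best.placement; // merge v1 into v0 at the new position
            // (omitted: rewrite edges referencing v1, re-evaluate affected edges)
            --remaining;
        }
    }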
Edge collapse is argued to be more attractive than vertex removal since it preserves the geometry of the model better and does not require triangulation algorithms, as it does not create holes in the model. We argue that without a further look at the resulting triangles at each collapse, some vertices will be ordered incorrectly, generating models that cannot be backface-culled (and thus are not as efficient as they could be with a more careful approach).
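The kind of check we have in mind could look like the following sketch, which tests whether moving a vertex during a collapse flips a surrounding triangle's normal (and thus its winding):

    struct Vec3 { double x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                          a.z * b.x - a.x * b.z,
                                          a.x * b.y - a.y * b.x }; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // True if moving vertex a to a2 flips the orientation of triangle (a, b, c):
    // the normals before and after point in opposing directions.
    bool windingFlips(Vec3 a, Vec3 b, Vec3 c, Vec3 a2)
    {
        Vec3 before = cross(sub(b, a),  sub(c, a));
        Vec3 after  = cross(sub(b, a2), sub(c, a2));
        return dot(before, after) < 0.0;
    }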
This paper is the first we have found that shows a specific error quantification method. We were introduced to a program called Metro that estimates mean geometric error among levels of detail for 3D models - a useful comparison tool.
William J. Schroeder, Jonathan A. Zarge and William E. Lorensen
This paper is a technical report on the decimation algorithm. The paper discusses the evaluation and implementation of the algorithm. It also defines the goal of decimation: "to reduce the total number of triangles in a triangle mesh, while preserving as accurately as possible important features."
This paper outlines three steps in the decimation process:
- characterize the local vertex geometry and topology
- evaluate the decimation criteria
- triangulate the resulting hole
The paper characterizes the local geometry and topology using five categories of vertices: simple, complex, boundary, interior edge, and corner.
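For illustration, these categories could be expressed as an enumeration; the comments are our own paraphrase of the paper's definitions.

    enum class VertexCategory {
        Simple,       // surrounded by a complete cycle of triangles, each edge shared by two
        Complex,      // non-manifold: an edge used by more than two triangles, etc.
        Boundary,     // on the mesh boundary; the surrounding triangles form a half-cycle
        InteriorEdge, // simple, but lies on a feature edge between two regions
        Corner        // simple, but lies where feature edges meet
    };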
The paper simplifies the issues of vertex removal and vertex selection. In particular, it shows that finding vertices to remove is an iterative process and that selection hinges on comparing a vertex's distance to the average plane of its surrounding triangles against a given threshold, denoted d in the paper.
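A sketch of this criterion, under our reading of the paper, follows: the average plane of the surrounding triangles is built from an area-weighted normal and an averaged centroid, and a simple vertex is a removal candidate when its distance to that plane falls within d.

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                          a.z * b.x - a.x * b.z,
                                          a.x * b.y - a.y * b.x }; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    double distanceToAveragePlane(Vec3 v, const std::vector<Triangle>& ring)
    {
        if (ring.empty()) return 0.0;
        Vec3 normal{0, 0, 0};   // area-weighted average normal
        Vec3 centroid{0, 0, 0}; // average of triangle centroids
        for (const Triangle& t : ring) {
            Vec3 n = cross(sub(t.b, t.a), sub(t.c, t.a)); // length = 2 * area
            normal.x += n.x; normal.y += n.y; normal.z += n.z;
            centroid.x += (t.a.x + t.b.x + t.c.x) / 3.0;
            centroid.y += (t.a.y + t.b.y + t.c.y) / 3.0;
            centroid.z += (t.a.z + t.b.z + t.c.z) / 3.0;
        }
        centroid.x /= ring.size(); centroid.y /= ring.size(); centroid.z /= ring.size();
        double len = std::sqrt(dot(normal, normal));
        if (len == 0.0) return 0.0;
        Vec3 unit{ normal.x / len, normal.y / len, normal.z / len };
        return std::fabs(dot(unit, sub(v, centroid))); // |n . (v - p)|
    }

    // A simple vertex is a removal candidate when its neighborhood is nearly
    // flat, i.e. its distance to the average plane is within the threshold d.
    bool isRemovalCandidate(Vec3 v, const std::vector<Triangle>& ring, double d)
    {
        return distanceToAveragePlane(v, ring) < d;
    }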
The last step in the decimation process is the triangulation of the hole left by the vertex removal. This paper suggests a split-plane method: a recursive process that finds the best candidate vertex and splits the hole with a plane that is orthogonal to the average plane and passes through the starting vertex and the best candidate. It then repeats with another split on each side of the previous split. The recursion stops when only three vertices remain on a side, since three vertices form a triangle. When successful, this method guarantees a triangulation in which no triangles overlap and the hole is completely filled.
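The recursive structure might be sketched as follows; chooseCandidate stands in for the paper's split-plane scoring, which we do not reproduce here.

    #include <cstddef>
    #include <vector>

    using Loop = std::vector<int>; // vertex indices around the hole, in order
    struct Tri { int a, b, c; };

    // Placeholder: the paper picks the best candidate by a split-plane criterion.
    std::size_t chooseCandidate(const Loop& loop) { return loop.size() / 2; }

    void triangulate(const Loop& loop, std::vector<Tri>& out)
    {
        if (loop.size() < 3) return;
        if (loop.size() == 3) {        // base case: three vertices form a triangle
            out.push_back({ loop[0], loop[1], loop[2] });
            return;
        }
        std::size_t split = chooseCandidate(loop); // pair loop[0] with the split vertex
        Loop left(loop.begin(), loop.begin() + split + 1); // loop[0] .. loop[split]
        Loop right(loop.begin() + split, loop.end());      // loop[split] .. back
        right.push_back(loop[0]);                          // close the second loop
        triangulate(left, out);
        triangulate(right, out);
    }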
Our program uses a subset of the decimation algorithm defined in this paper. Specifically, it uses the distance-to-plane formula and deals with only one category of vertices: simple. Our program avoids special cases for edges and corners, since we assumed that all points would be attached, which implies that no edge belongs to only one triangle and that every point is shared by more than one triangle.
Michael Garland and Paul S. Heckbert, Carnegie Mellon University
This paper presents one of the most robust algorithms for model decimation. The algorithm uses the quadric error at each vertex, which can be thought of as representing how creased the surface is at that vertex. The algorithm also takes into account the color at each vertex or the texture used with the model. It does not necessarily generate manifold surfaces, however, and is fairly expensive in both memory usage and running time.
The quadric error at each vertex is found by summing the squares of the distances between the vertex and the planes of the triangles surrounding the vertex. The algorithm performs decimation by joining pairs of vertices into one vertex, choosing the position of the new vertex between the two points that minimizes the quadric error. First, a list of pairs is generated by selecting pairs of vertices that are valid for contraction. A pair is valid if its vertices form an edge or are sufficiently close together (given a threshold parameter). The cost of contracting a pair is simply the sum of the quadric errors at its two vertices. The algorithm iteratively contracts the pair with the smallest cost, replacing occurrences of each vertex from the contracted pair with the new vertex in every pair in the list. The algorithm was extended to incorporate major differences in vertex color between vertices, or texture color near the vertices, into the cost calculated for each pair.
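As a sketch of the underlying arithmetic (using our own helper names, and leaving out the paper's step of solving for the position that minimizes the cost): each plane p = (a, b, c, d), with a^2 + b^2 + c^2 = 1, contributes the 4x4 matrix Q = p p^T; a vertex's quadric is the sum over its surrounding planes; and the error of placing a vertex at v = (x, y, z, 1) is v^T Q v.

    #include <array>

    using Quadric = std::array<std::array<double, 4>, 4>;

    // Quadric of a single plane ax + by + cz + d = 0: the outer product p p^T.
    Quadric planeQuadric(double a, double b, double c, double d)
    {
        const double p[4] = { a, b, c, d };
        Quadric q{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                q[i][j] = p[i] * p[j];
        return q;
    }

    // Accumulate another quadric, e.g. summing the planes around a vertex or
    // combining the two vertex quadrics of a candidate pair.
    void add(Quadric& q, const Quadric& other)
    {
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                q[i][j] += other[i][j];
    }

    // v^T Q v with v = (x, y, z, 1): the sum of squared distances to the
    // planes accumulated into Q.
    double quadricError(const Quadric& q, double x, double y, double z)
    {
        const double v[4] = { x, y, z, 1.0 };
        double err = 0.0;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                err += v[i] * q[i][j] * v[j];
        return err;
    }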