
Mesh Simplification

Decimation of Triangle Meshes

William J. Schroeder, Jonathan A. Zarge, and William E. Lorensen [shroeder]

This paper is a technical report on the decimation algorithm. The paper discusses the evaluation and implementation of the algorithm. It also defines the goal of decimation: "to reduce the total number of triangles in a triangle mesh, while preserving as accurately as possible important features." [shroeder]


This paper outlines three steps in the decimation process.

  1. characterize the local vertex geometry and topology
  2. evaluate the decimation criteria
  3. triangulate the resulting hole


The paper characterizes the local geometry and topology by placing each vertex into one of five categories: simple, complex, boundary, interior edge, and corner. A simple vertex shares two triangles with each of its neighbors (a neighbor being any vertex that belongs to a triangle also containing the vertex under scrutiny) and is not an interior edge or corner vertex. A complex vertex has at least one neighbor with which it does not share two triangles, yet is not a boundary vertex. A boundary vertex fails to share two triangles with some neighbor because it lies on the edge of the mesh. An interior edge vertex belongs to two edges across which the adjoining triangles meet at a sharp angle, and a corner vertex belongs to three such edges (see Figure [*]). Only simple and interior edge vertices are candidates for removal.

Figure: The vertex categories. Reproduced from Schroeder et al.

The paper then simplifies the issues of vertex selection and removal. In particular, it shows that finding vertices to remove is an iterative process and that selection is based on a distance value d computed with a simple formula: d is the distance from the vertex to the best-fit average plane through its neighboring vertices. In Schroeder's and his colleagues' work, if d is less than a given threshold, the vertex is removed.

Figure: The distance to the average plane. Reproduced from Schroeder et al.
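
To make the selection criterion concrete, here is a sketch of computing d for a single vertex, assuming a closed fan of non-degenerate triangles around it; the function and variable names are ours, not Schroeder et al.'s, and numpy stands in for whatever vector library an implementation would use.

    import numpy as np

    def distance_to_average_plane(v, ring_triangles, points):
        """Distance d from vertex v to the area-weighted average plane of the
        triangles surrounding it. `ring_triangles` is a list of index triples
        and `points` is an array of 3D coordinates indexed by vertex id."""
        normals, areas, centroids = [], [], []
        for i, j, k in ring_triangles:
            p0, p1, p2 = points[i], points[j], points[k]
            n = np.cross(p1 - p0, p2 - p0)       # unnormalized normal, |n| = 2 * area
            area = 0.5 * np.linalg.norm(n)
            if area == 0.0:
                continue                          # ignore degenerate triangles
            normals.append(n / (2.0 * area))
            areas.append(area)
            centroids.append((p0 + p1 + p2) / 3.0)
        w = np.array(areas)
        N = np.average(normals, axis=0, weights=w)    # average plane normal
        N /= np.linalg.norm(N)
        C = np.average(centroids, axis=0, weights=w)  # a point on the average plane
        return abs(float(np.dot(N, points[v] - C)))   # the distance d

A vertex whose d falls below the user's threshold would then be removed and the resulting hole retriangulated.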

The last step in the decimation process is the triangulation of the hole left by the vertex removal. The paper suggests a split-plane method: a recursive process that finds the neighboring vertex which is the best candidate for retriangulation (the one that would produce the most uniform, least skinny triangles) and splits the hole with a plane that passes through the starting vertex and that candidate and is orthogonal to the average plane. The process then repeats with another split on each side of the previous one, and the recursion stops when only three vertices remain on a side, since three vertices form a triangle. When successful, this method guarantees a triangulation in which no triangles overlap and the hole is completely filled. See Figure [*].

Figure: The split-plane algorithm. The average plane of the removed vertex's neighbors is shown, along with the split plane, shaded, that is orthogonal to it, and the split line, bold, which will become a new edge in the model. The three lines connected by two points in the average plane of the picture are the edges of the hole that is being filled in. Reproduced from Schroeder et al.
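
The following sketch captures only the recursive structure of the split-plane idea, using a deliberately simplified split criterion (the smallest distance from the remaining loop vertices to the candidate split line, divided by the split length) as a rough stand-in for the paper's aspect-ratio test. It ignores concave holes and is not Schroeder et al.'s exact procedure; all names are ours.

    import numpy as np

    def triangulate_loop(loop, points):
        """Recursively triangulate a closed loop of vertex indices by choosing
        a split between two non-adjacent loop vertices and recursing on each
        half. `loop` lists vertex indices in order around the hole; `points`
        is an array of 3D coordinates indexed by vertex id. Assumes the hole
        is roughly convex, so any split stays inside it."""
        if len(loop) == 3:
            return [tuple(loop)]                # three vertices form a triangle
        n = len(loop)
        best, best_score = None, -np.inf
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                    # adjacent around the loop
                a, b = points[loop[i]], points[loop[j]]
                split_len = np.linalg.norm(b - a)
                if split_len == 0.0:
                    continue
                # smallest distance from the other loop vertices to the split
                # line, normalized by the split length (rough aspect ratio)
                d_min = min(
                    np.linalg.norm(np.cross(points[loop[k]] - a, b - a)) / split_len
                    for k in range(n) if k not in (i, j)
                )
                score = d_min / split_len
                if score > best_score:
                    best_score, best = score, (i, j)
        i, j = best
        left = loop[i:j + 1]                    # vertices i .. j
        right = loop[j:] + loop[:i + 1]         # vertices j .. end, then 0 .. i
        return triangulate_loop(left, points) + triangulate_loop(right, points)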

Our program implements a subset of the decimation algorithm defined in this paper. Specifically, it uses the distance-to-average-plane formula but does not detect complex vertices. Our program also has a simple edge and corner detector that is neither robust nor optimal.

Simplifying Surfaces with Color and Texture Using Quadric Error Metrics

Michael Garland and Paul S. Heckbert, Carnegie Mellon University [garland]

This paper presents a robust algorithm for model decimation, produced by Michael Garland and Paul Heckbert of Carnegie Mellon University in 1998. The algorithm selects vertices based on the quadric error at each vertex, which can be thought of as representing how creased the surface is at that vertex. More precisely, the quadric error is the sum of squared distances to a set of planes, initially the faces incident to the vertex, though these faces may not remain incident upon it as the algorithm proceeds. The algorithm also takes into account the color at each vertex or the texture used with the model. It does not necessarily generate manifold surfaces (fully closed surfaces with only simple, interior edge, and corner vertices), however, and is fairly expensive in memory usage and speed (see the review of the Lindstrom and Turk paper below).

The quadric error at each vertex is found by summing the squares of the distances between the vertex and the planes of the triangles surrounding it. The algorithm performs decimation by joining pairs of vertices into one vertex, choosing the position of the new vertex so as to minimize the quadric error. First, a list of candidate pairs is generated by selecting pairs of vertices that are valid for contraction; a pair is valid if its vertices share an edge or are sufficiently close together (given a threshold parameter). The cost of contracting a pair is the sum of the quadric errors of its two vertices. The algorithm iteratively contracts the pair with the smallest cost, replacing occurrences of either vertex of the contracted pair with the new vertex in every remaining pair in the list. The algorithm was also extended to incorporate large differences in vertex color, or in texture color near the vertices, into the cost calculated for each pair.
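
Concretely, the squared distance from a point to a plane can be folded into a 4x4 matrix (Garland and Heckbert's fundamental quadric), so a vertex's error is one matrix applied to its homogeneous coordinates and a pair's cost uses the sum of two such matrices. The sketch below covers the geometric part only, omitting the color and texture extensions; the function names are ours and vertices are assumed to be numpy arrays of 3D points.

    import numpy as np

    def plane_quadric(p0, p1, p2):
        """Fundamental quadric K = p p^T for the supporting plane of a triangle,
        with p = (a, b, c, d), ax + by + cz + d = 0 and (a, b, c) the unit
        normal. Assumes a non-degenerate triangle."""
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)
        p = np.append(n, -np.dot(n, p0))
        return np.outer(p, p)

    def vertex_quadrics(vertices, faces):
        """Each vertex's quadric is the sum of the quadrics of its incident faces."""
        Q = [np.zeros((4, 4)) for _ in vertices]
        for i, j, k in faces:
            K = plane_quadric(vertices[i], vertices[j], vertices[k])
            for idx in (i, j, k):
                Q[idx] += K
        return Q

    def pair_cost(Q1, Q2, v):
        """Cost of contracting a pair to position v: [v 1] (Q1 + Q2) [v 1]^T."""
        h = np.append(v, 1.0)
        return float(h @ (Q1 + Q2) @ h)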

The work done by Garland and Heckbert takes a different approach to mesh simplification than that of Schroeder et al. Rather than removing a vertex from the model outright, their method merges two vertices into one new vertex which may not be (and usually is not) at the location of either original vertex, but is instead placed at a new location in space chosen to minimize the total geometric error of the model. The overall result cannot be made globally optimal; the placement of each new vertex is itself a local optimization problem.
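
As a sketch of how that placement can be computed in the quadric framework, setting the gradient of the combined error to zero gives a small linear system; when the system is singular, a reasonable fallback is to evaluate a few candidate points and keep the cheapest, which is the spirit of Garland and Heckbert's fallback. The helper below assumes the quadric matrices built above; its name and interface are ours.

    import numpy as np

    def optimal_position(Q, fallback_points):
        """Position minimizing [v 1] Q [v 1]^T. Setting the gradient to zero
        gives a 4x4 system whose last row pins the homogeneous coordinate to 1;
        if that system is singular, fall back to the best supplied candidate."""
        A = Q.copy()
        A[3, :] = [0.0, 0.0, 0.0, 1.0]
        rhs = np.array([0.0, 0.0, 0.0, 1.0])
        def cost(p):
            h = np.append(p, 1.0)
            return float(h @ Q @ h)
        try:
            candidates = [np.linalg.solve(A, rhs)[:3]] + list(fallback_points)
        except np.linalg.LinAlgError:
            candidates = list(fallback_points)
        return min(candidates, key=cost)

Here `fallback_points` would typically hold the two original vertex positions and their midpoint.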

Mesh Optimization

Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle, University of Washington [hoppe93mesh]

This article, the result of research by Hugues Hoppe and his colleagues at the University of Washington in 1993, describes a mesh simplification method that casts the problem as the minimization of an energy function. They define this energy function as the sum of three terms that balance two competing goals (fewer vertices and a result that closely resembles the original model) while keeping each change local to the mesh. The energy function is defined as:


    E(K, V) = E_dist(K, V) + E_rep(K) + E_spring(K, V)

The term V represents, in the computer's memory, the positions of the vertices in the model, and the term K represents the relationships between those vertices as edges and faces. Their program can add or remove vertices, as will be described later, to find a minimal value of the sum of these terms, which are described below.

The first term in the energy function is the distance energy E_dist. It is defined as "the sum of squared distances from the points to the mesh":

    E_dist(K, V) = Σ_i d²(x_i, M)

where the x_i are points sampled from the original mesh and d(x_i, M) is the distance from x_i to the current mesh M (the surface determined by K and V).

This value is increased as the current representation moves further away from its original form. This means that the program must keep the original mesh in memory so that it can iteratively compare the current representation to the original one input by the user.

The second term in the energy function is the representation energy E_rep. It is defined as:

    E_rep(K) = c_rep * m

where m is the number of vertices in the mesh and c_rep is a user-selected constant that determines how much this term affects the result, and thus how important a compact representation is to the user.

The third and last term, E_spring, is the spring energy of the mesh, with a spring of rest length zero placed along each edge. It is defined as:

    E_spring(K, V) = Σ_{(j,k) ∈ K} κ ||v_j − v_k||²

where the sum runs over the edges (j, k) of the mesh and κ is the spring constant.

This term ensures that sharp areas of the mesh are not penalized (and thus over-simplified) and helps to "guide the optimization to a desirable local minimum." The value of κ is set by the program to a small value, and the entire optimization is repeated with κ decreased for each iteration, fine-tuning the result when fewer calculations are necessary.
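
As a rough sketch of the whole objective, the function below adds the three terms for one mesh configuration; to keep it short it measures E_dist to the nearest vertex rather than to the nearest point on the surface, which is a simplification of ours, not the paper's.

    import numpy as np

    def total_energy(sample_points, vertices, edges, c_rep, kappa):
        """E = E_dist + E_rep + E_spring for one mesh configuration."""
        V = np.asarray(vertices, dtype=float)
        # E_dist: squared distance from each point sampled off the original mesh
        # to the current mesh (approximated here by the nearest vertex)
        e_dist = sum(float(np.min(np.sum((V - np.asarray(x)) ** 2, axis=1)))
                     for x in sample_points)
        # E_rep: a constant penalty c_rep per vertex
        e_rep = c_rep * len(V)
        # E_spring: zero-rest-length spring with constant kappa on every edge
        e_spring = kappa * sum(float(np.sum((V[i] - V[j]) ** 2)) for i, j in edges)
        return e_dist + e_rep + e_spring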

The problem was broken down into two nested subproblems: an inner problem of reducing the representation energy by removing or replacing vertices, and an outer problem of reducing the distance energy by choosing the best of a few courses of action proposed by the inner problem. Vertices are removed, added, or replaced using one of three moves from one iteration to the next: edge collapse, which merges the two vertices of an edge into one, removing one vertex, three edges, and two triangles; edge split, which adds a new vertex on an existing edge, adding one vertex, three edges, and two triangles; and edge swap, which moves an edge from between two vertices to between two of their neighbors, re-orienting two triangles while keeping the numbers of triangles, edges, and vertices the same. It is interesting to note that edge split and edge swap do not reduce the representation energy, but may reduce the distance or spring energy. For example, a swap may reorient long triangles to follow a smoother direction of the mesh, relieving some spring energy, a result Hoppe et al. identify as desirable; this can lower the total energy for the mesh.
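
The outer loop then amounts to random descent over these moves: propose a legal move at random, apply it, and keep the result only if the total energy drops. The sketch below shows only that control flow; the mesh representation, energy function, and move generator are placeholders supplied by the caller, and the names are ours.

    import random

    def optimize(mesh, energy, legal_moves, n_trials=10000):
        """Random-descent outer loop in the spirit of Hoppe et al.: propose
        moves at random and keep only those that lower the total energy.
        `energy(mesh)`, `legal_moves(mesh)` and `move.apply(mesh)` are
        placeholders for whatever representation the caller uses."""
        current = energy(mesh)
        for _ in range(n_trials):
            move = random.choice(legal_moves(mesh))  # edge collapse, split, or swap
            candidate = move.apply(mesh)             # apply to a copy of the mesh
            e = energy(candidate)
            if e < current:                          # accept only if energy drops
                mesh, current = candidate, e
        return mesh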

There are a number of issues that this method does not address, as identified in Lindstrom and Turk's most recent work [lindstrom]. First, evaluating the energy of the current mesh representation is costly, and the need to repeat the entire optimization several times, each with a reduced spring constant, means the overall operation cannot be done in linear time. Second, the method used to select the next move is unintelligent: rather than preferring moves in smooth areas that should be reduced, moves are selected purely at random and accepted if they decrease the total energy or rejected if they increase it. Third, the program does not choose an optimal placement for the vertex produced by an edge collapse or edge split; instead it places the new vertex at one of the two endpoints of the edge (the equivalent of a vertex removal) or at its midpoint, choosing whichever of the three placements produces the lowest total energy. Finally, the method consumes more memory than necessary by keeping a copy of the original mesh for comparison against the current mesh at every iteration. Although it would seem that this should yield better results than comparing against the previous iteration's representation, in practice a good placement for the vertex resulting from an edge collapse produces results similar to this mesh optimization method in far less time. The next reviews cover more mature methods of placing new vertices.

Progressive Meshes

Hugues Hoppe, Microsoft Research [hoppe96progressive]

With the growing expectation of realistic visual representations, highly detailed geometric models are necessary. As previously stated, highly detailed geometric models imply complex meshes, which have the disadvantage of being expensive to store, transmit, and render. This expectation of realism led Hoppe to develop a new representation for triangle meshes that minimizes these disadvantages. The Progressive Mesh (PM) representation offers significant benefits over common mesh representations such as VRML [vrml]. We define a common mesh representation as a file that contains vertex positions in 3D space and a triangle list that indexes into those vertices.

A Progressive Mesh (PM) is a mesh representation that can "progress" through many levels of detail seamlessly. To do this it uses two basic mesh transformations. The first is the previously defined edge collapse [hoppe93mesh], discussed in the paper reviewed above, in which two neighboring vertices combine into one. An algorithm that uses the PM representation applies edge collapses to optimize a mesh. To make the representation "progressive", however, the algorithm must also implement the inverse of an edge collapse: the vertex split, which recreates two vertices from a single vertex. These two transformations are the foundation of the PM representation. Mesh optimization occurs through a sequence of edge collapses, and mesh rebuilding occurs through the corresponding sequence of vertex splits, so a PM is a mesh data representation capable of optimizing or rebuilding itself.

The ability to optimize and rebuild a mesh is one of the major achievements over common mesh representations, since the original model can be recreated from a PM at any level of detail. This means the data structure is a lossless representation, which is a benefit in two ways. First, a mesh can be compressed to an extremely low level of detail and then rebuilt back to the original mesh with a sequence of vertex splits. Second, the extremely low-detail model can be transmitted across data lines to a remote device (most likely another computer that recognizes the data as a polygonal mesh) and rebuilt there with a sequence of vertex splits. This second benefit is what Hoppe defines as Progressive Transmission.
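
A minimal sketch of the data layout this implies: a base mesh plus an ordered list of vertex-split records, with a refine step that replays the first k splits to reach an intermediate level of detail. The record fields and names are ours; Hoppe's actual vsplit records also re-attach some existing faces around the split vertex and carry attribute ("wedge") data, which this sketch omits.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class VertexSplit:
        """One recorded split: which vertex to split, its updated position,
        the position of the new vertex it spawns, and the faces added."""
        vs: int
        vs_pos: Vec3
        vt_pos: Vec3
        new_faces: List[Tuple[int, int, int]]

    @dataclass
    class ProgressiveMesh:
        """Base (coarsest) mesh plus the ordered stream of vertex splits."""
        base_vertices: List[Vec3]
        base_faces: List[Tuple[int, int, int]]
        splits: List[VertexSplit] = field(default_factory=list)

        def refine(self, k: int):
            """Replay the first k splits to rebuild a finer level of detail."""
            verts = list(self.base_vertices)
            faces = list(self.base_faces)
            for sp in self.splits[:k]:
                verts[sp.vs] = sp.vs_pos    # the split vertex moves
                verts.append(sp.vt_pos)     # and spawns a new vertex vt
                faces.extend(sp.new_faces)  # two new faces appear per split
            return verts, faces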

The second major achievement is a new simplification method implemented using the Progressive Mesh data representation. This procedure simplifies based on overall appearance, "not just the geometry of the mesh surface." It is based on Hoppe's earlier paper, "Mesh Optimization" [hoppe93mesh], and, like that work, it combines several energy functions into an energy metric used to measure the accuracy of a simplified mesh.

The simplification algorithm applies a sequence of edge collapse transformations to the original mesh, and for each edge collapse the program records the corresponding vertex split. After the mesh has been fully optimized, the program writes the base mesh to a file along with the recorded vertex splits; using that file, the original mesh can be recreated by replaying the splits. We see this new simplification method as an example that should lead to better algorithms built on Progressive Meshes.

The third major achievement is the ingenious use of Progressive Meshes for geomorphing. A geomorph is a smooth visual transition between two meshes. When a mesh is modified by a vertex split or edge collapse, it can be restored to its previous state by the inverse transformation; each transformation has a single corresponding "undo" transformation. Because each individual "transformation can be transitioned smoothly, so can the composition of any sequence of them." Since a PM is a sequence of edge collapses or vertex splits, "any two meshes of a PM representation" can be used to make a visually smooth transition.
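
The geomorph itself reduces to per-vertex interpolation between two levels of detail: each vertex of the finer mesh blends from the position of its ancestor in the coarser mesh to its own position as a parameter t goes from 0 to 1. A minimal sketch, where `vertex_map` (our name) gives each fine vertex's ancestor index in the coarse mesh:

    def geomorph(coarse_verts, fine_verts, vertex_map, t):
        """Blend vertex positions between two levels of detail of a PM.
        Each fine vertex moves from the position of its coarse ancestor
        (vertex_map[i]) to its own position as t goes from 0 to 1."""
        return [
            tuple((1.0 - t) * c + t * f
                  for c, f in zip(coarse_verts[vertex_map[i]], fine_verts[i]))
            for i in range(len(fine_verts))
        ]

Hoppe interpolates scalar vertex attributes the same way, so the whole transition can be rendered as an animation between any two meshes of the PM.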

Fast and Memory Efficient Polygonal Simplification

Peter Lindstrom and Greg Turk, Georgia Institute of Technology, Atlanta [lindstrom]

This article describes the work done by Peter Lindstrom and Greg Turk at the Georgia Institute of Technology on polygonal model simplification in 1998. Their algorithm calculates error metrics for all neighboring pairs of vertices (edges) and collapses the edge with the lowest error into a single vertex. The new vertex is placed at a position that keeps the total volume of the model constant and satisfies a number of other error-reducing constraints that lower mean geometric error. What makes their work unique in the field is that their algorithm, while remaining very accurate compared to peer work, makes greedy decisions for each edge collapse and vertex placement and keeps no information about what the model used to look like, making it extremely fast and memory efficient.
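
The volume being held constant is the total signed volume of the closed mesh, which the divergence theorem gives as one sixth of the sum of determinants of the face vertex triples. The helper below only evaluates that quantity (for instance, to check that it is unchanged across a collapse); it is not Lindstrom and Turk's derivation of the placement constraint itself, and the name is ours.

    import numpy as np

    def signed_volume(vertices, faces):
        """Total signed volume of a closed, consistently oriented triangle
        mesh: (1/6) * sum over faces (i, j, k) of det([p_i, p_j, p_k])."""
        V = np.asarray(vertices, dtype=float)
        return sum(np.linalg.det(np.vstack((V[i], V[j], V[k])))
                   for i, j, k in faces) / 6.0

Their placement rule constrains the new vertex so that this sum, taken over the triangles touched by the collapse, does not change.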

Edge collapse is argued to be more attractive than vertex removal because it preserves the geometry of the model better and does not require a triangulation algorithm, since it creates no holes in the model. We argue that without a further look at the resulting triangles after each collapse, some vertices may be ordered incorrectly, generating models that cannot be backface-culled (and are thus not as efficient as they could be with a more careful approach).

This paper is the only one in the literature we have seen that presents a specific method for quantifying the error introduced by mesh simplification. It also introduced us to a program called Metro, which estimates mean geometric error among levels of detail of 3D models and is a useful comparison tool.

A Unified Approach for Simplifying Polygonal and Spline Models

M. Gopi and D. Manocha, University of North Carolina at Chapel Hill [gopi]

This article overviews the work done by M. Gopi and D. Manocha, in the Department of Computer Science at the University of North Carolina at Chapel Hill, in model simplification. They first created a method of simplifying B-spline models, and then generalized the use of their algorithm by creating a multi-pass method for simplifying any 3D model.

Gopi and Manocha's program runs in three passes. The first pass creates an intermediate C-Model representation, a collection of triangular spline patches, from a polygonal mesh, triangular mesh, or tensor product patch model. The second pass merges edges of the patches and then performs further simplification by matching patterns of patch adjacencies against a lookup table of known simplifications that are safe to perform. The final pass tessellates the simplified patches back into the model's original type of representation. The approach appears to be very flexible and fairly safe, but not especially efficient in time or space.

Gopi and Manocha's implementation was written in C++ using arrays. Theirs is a work in progress; they cite work that will influence the next iteration of development.


Tim Garthwaite 2002-02-03