
Computational Complexity

To conclude our analysis, we examine our algorithm's computational complexity. This analysis is useful because it lets us predict the algorithm's performance on very large models (those with more than 300,000 vertices), which are common in medical imaging systems [gvu].

A noticeable feature of the graph shown in Figure [*] is that the number of vertices removed per second decreases as the model size increases. This is because the computational complexity of our algorithm is not linear. The results imply that the computational complexity is $O(n^2)$, where $n$ is the number of vertices in the model. This is confirmed when we examine the algorithm. The decimation process, in the most severe case of considering every vertex for removal, iterates through each vertex in the model once ($n$ iterations). While considering a vertex for removal and retriangulating, the process could, in the worst case, iterate through the list of all vertices in the model a constant number of times ($cn$ iterations in total). Since $cn$ is $O(n)$, the complexity of our program is $O(n) \cdot O(n) = O(n^2)$.
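To make the source of the quadratic cost concrete, the following sketch shows the loop structure just described. It is a minimal illustration under assumed names (Model, canRemove, retriangulate), not our actual implementation:

  // Sketch of the decimation loop structure described above. All names
  // are hypothetical stand-ins, not the actual implementation.
  #include <cstddef>
  #include <vector>

  struct Vertex { double x, y, z; };
  struct Model  { std::vector<Vertex> vertices; };

  // Deciding whether a vertex may be removed can scan the vertex list a
  // constant number of times: O(n) in the worst case.
  bool canRemove(const Model& m, std::size_t v)
  {
      for (std::size_t i = 0; i < m.vertices.size(); ++i) {
          // ... evaluate the decimation criterion for vertex v
          //     against vertex i ...
      }
      return false;  // placeholder result
  }

  // Retriangulating the hole left by a removed vertex can likewise touch
  // the vertex list a constant number of times: also O(n).
  void retriangulate(Model& /*m*/, std::size_t /*v*/) { /* ... */ }

  void decimate(Model& m)
  {
      // Worst case: every vertex is considered for removal (n iterations),
      // each costing O(n), for O(n) * O(n) = O(n^2) overall.
      for (std::size_t v = 0; v < m.vertices.size(); ++v) {
          if (canRemove(m, v)) {
              retriangulate(m, v);
          }
      }
  }

The outer loop contributes the first factor of $n$, and each potentially list-scanning loop body contributes the second; this is where the $O(n^2)$ behavior comes from.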

This is verified numerically by our results using the following process (not a rigorous proof, but the results are close enough to point towards validity). First, calculate the ratio of the total number of vertices of one model to that of a smaller model; for example, we computed this ratio for test007 and test006. Next, calculate the ratio between the time taken for the first model to decimate to a given percentage and the time taken for the second model to decimate to the same percentage; for the same pair, we used the times spent decimating to ninety percent on BLACK. Finally, take the ratio of the logarithms of these two numbers to get the power $k$ of $n$ in the $O(n^k)$ notation: if decimation time grows as $n^k$, then the time ratio equals the vertex ratio raised to the power $k$. (Note: some calculations are always done outside of any inner iterations through the vertex list, so a complete analysis would have to subtract a constant setup time from both measured times before computing the ratio.) When we repeat this for the rest of the computers and models, we get an average value for $k$ that, without subtracting for constant-time operations or accounting for thrashing with large models on Windows, seems to concur with the power of $2$ we expected.
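As a concrete sketch of this estimation (the vertex counts and timings below are placeholder values, not our measured results), the exponent can be computed as follows:

  // Sketch of the exponent estimation described above. The inputs are
  // placeholder values; the real vertex counts and timings come from the
  // test models and machines discussed in this chapter.
  #include <cmath>
  #include <cstdio>

  int main()
  {
      double n1 = 200000.0, n2 = 100000.0;  // placeholder vertex counts
      double t1 = 40.0,     t2 = 10.0;      // placeholder times (seconds)

      double rv = n1 / n2;                  // vertex ratio
      double rt = t1 / t2;                  // time ratio

      // If decimation time grows as n^k, then rt = rv^k, so
      // k = ln(rt) / ln(rv).
      double k = std::log(rt) / std::log(rv);
      std::printf("estimated exponent k = %.2f\n", k);  // 2.00 here
      return 0;
  }

With these placeholder inputs the estimate is exactly $2$; with real measurements one would expect a value near $2$, perturbed by the constant-time overhead and the thrashing noted above.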

