next up previous contents
Next: Computational Complexity Up: Analysis Previous: Geometric Error   Contents

Computing Platform

Here we examine the effects the computing platform can have on the performance of our decimation program. For our purposes, the computing platform consists of a computer's hardware and the operating system it runs.

Figure [*] compares the time each test computer took to decimate the largest six test models to ninety percent of vertices removed. The logarithmic scale allows results that are orders of magnitude apart to be compared easily. The hand model is the largest model; decimating it by ninety percent leaves a model with 32,733 vertices. This procedure takes a long time to complete: 46,419.59 seconds on DELL, 31,105.185 seconds on BLACK, and 52,262.51 seconds on DUAL. Unlike in most of the other results in this graph, DUAL is not the fastest machine. BLACK wins by a large margin, even though it has the weakest CPU, a largely outdated sixth-generation x86 AMD K6-2. Since both computers run the same operating system, Linux 2.4, and DUAL has much more memory, this result is unexpected. DELL has similar hardware but a CPU that is at least twice as powerful [cpubenchmarks].
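The vertex count after ninety-percent decimation can be sanity-checked with a short sketch. The rounding convention used here, removing the floor of ninety percent of the vertices, is our assumption rather than something stated by the program, but it reproduces the hand-model figures above (327,323 original vertices, 32,733 remaining):

```python
import math

def vertices_after_decimation(original_vertices, fraction_removed):
    """Number of vertices left after removing a given fraction.

    Assumes the decimator removes floor(fraction * n) whole vertices;
    this rounding convention is an assumption, not taken from the program.
    """
    removed = math.floor(fraction_removed * original_vertices)
    return original_vertices - removed

# Hand model: 327,323 original vertices, ninety percent removed.
print(vertices_after_decimation(327323, 0.9))  # -> 32733
```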

Figure: The time taken on each test computer for ninety percent removal on the largest six test models.

Looking at all of the test model results in this graph, we see that a Windows-based operating system has the best time only once, and only on the smallest model compared. This implies that our program performs better under Linux 2.4. We can also say that our program is capable of handling medical-imaging-sized models, since the largest original model in our testing has 327,323 vertices. We did notice considerable strain on DELL with this large model, however. We believe this occurred when the model filled main memory and the Microsoft Windows operating systems had to page to virtual memory on the hard drive: the systems began thrashing when the paged-out data was immediately needed again.

The next graph (Figure [*]) plots the number of vertices removed per second for each of the largest six test models.

Figure: The number of vertices removed per second for each of the largest six test models.

It appears that prolonged runs of our program, such as those on the largest models, cause computers with little memory to struggle to maintain a high rate of vertices removed per second. Specifically, on the test model test009, DELL removed about twenty-five fewer vertices per second than DUAL (37.0 versus 61.7), even though it has a much faster processor (a 750 MHz Pentium III versus a 450 MHz Celeron). We believe this can be attributed to DELL's smaller memory (384 MB versus 640 MB).
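The rates plotted in the second graph are simply total vertices removed divided by elapsed time. As a minimal sketch, the numbers from the hand-model run on DELL quoted earlier (294,590 vertices removed, i.e. ninety percent of 327,323, in 46,419.59 seconds) give:

```python
def removal_rate(vertices_removed, elapsed_seconds):
    """Vertices removed per second, as plotted in the second graph."""
    return vertices_removed / elapsed_seconds

# Hand model on DELL: 294,590 vertices removed in 46,419.59 seconds.
rate = removal_rate(294590, 46419.59)
print(round(rate, 1))  # -> 6.3 vertices per second
```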

These graphs lead us to recommend the Linux 2.4 version of our program. We also recommend installing as much memory as the computer system allows. Although we reduced our data structures' memory footprints as much as possible, the program's memory use can easily exceed 300 MB on medical-imaging-sized models. In one instance, DELL's usage climbed past 600 MB when loading the hand model; this is where the thrashing occurred.


Tim Garthwaite 2002-02-03