A significant challenge in the vizHome project has been finding techniques to efficiently render the millions of points we capture with the LiDAR scanner. Some of the homes we scanned produced data sets of upwards of 750 million points.
Although very powerful, modern GPUs are nowhere near able to render that many points per frame while still maintaining interactive frame rates.
So how many points can GPUs handle before they start to slow down?
I did some benchmarking on one of our power wall machines, which has a Quadro 5000 with 2.5 GB of graphics memory.
The test consisted of drawing some number of random points within a 50 meter x 50 meter x 50 meter cube, with the viewpoint at (0, 0, 50) facing straight down the -z axis so that the whole cube of points was in view.
The test was done using basic point drawing (squares) with a very simple shader and no smoothing; a rough sketch of this kind of benchmark is shown below.
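For reference, here is a minimal sketch of what such a benchmark can look like, using GLFW and GLEW with a basic GL 3.3 shader. The window size, field of view, near/far planes, and exact cube placement are assumptions for illustration, not the values from our actual test harness.

```cpp
// Rough sketch of a point-throughput benchmark: draw N random points from a
// fixed viewpoint with a trivial shader and report frames per second.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

static const char* kVertSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 pos;\n"
    "uniform mat4 mvp;\n"
    "void main() { gl_Position = mvp * vec4(pos, 1.0); }\n";

static const char* kFragSrc =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0); }\n";  // flat white, no smoothing

static GLuint compile(GLenum type, const char* src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s);
    return s;
}

int main() {
    const size_t kNumPoints = 10'000'000;  // sweep this to find the 60 fps knee

    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* win = glfwCreateWindow(1280, 720, "point benchmark", nullptr, nullptr);
    glfwMakeContextCurrent(win);
    glewExperimental = GL_TRUE;
    glewInit();
    glfwSwapInterval(0);  // vsync off so we measure raw throughput

    // Random points in a 50 m cube: x, y in [-25, 25], z in [-50, 0], so the
    // whole cube sits in front of an eye at (0, 0, 50) looking down -z.
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dxy(-25.0f, 25.0f), dz(-50.0f, 0.0f);
    std::vector<float> pts;
    pts.reserve(kNumPoints * 3);
    for (size_t i = 0; i < kNumPoints; ++i) {
        pts.push_back(dxy(rng));
        pts.push_back(dxy(rng));
        pts.push_back(dz(rng));
    }

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, pts.size() * sizeof(float), pts.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, kVertSrc));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, kFragSrc));
    glLinkProgram(prog);
    glUseProgram(prog);

    // Projection * view for an eye at (0, 0, 50) looking down -z, written out
    // column-major by hand to avoid pulling in a math library.
    float n = 0.1f, f = 200.0f, fovy = 60.0f * 3.14159265f / 180.0f;
    float t = 1.0f / std::tan(fovy * 0.5f), aspect = 1280.0f / 720.0f;
    float mvp[16] = { t / aspect, 0, 0, 0,   0, t, 0, 0,
                      0, 0, (f + n) / (n - f), -1,
                      0, 0, -50.0f * (f + n) / (n - f) + 2.0f * f * n / (n - f), 50.0f };
    glUniformMatrix4fv(glGetUniformLocation(prog, "mvp"), 1, GL_FALSE, mvp);

    glEnable(GL_DEPTH_TEST);
    double last = glfwGetTime();
    int frames = 0;
    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_POINTS, 0, (GLsizei)kNumPoints);
        glfwSwapBuffers(win);
        glfwPollEvents();
        ++frames;
        double now = glfwGetTime();
        if (now - last >= 1.0) {  // report frame rate once per second
            std::printf("%.1f fps at %zu points\n", frames / (now - last), kNumPoints);
            frames = 0;
            last = now;
        }
    }
    glfwTerminate();
    return 0;
}
```

Vsync is disabled in the sketch so the numbers reflect raw point throughput rather than the display refresh rate. Here are the results: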
As can be seen, 60 fps is lost fairly quickly, right around 8-10 million points. Considering a LiDAR scanner can scan 11 million points in 5-6 minutes, we have a challenging task on our hands if we want to maintain high frame rates.
The additional colored marks are sanity checks: runs where we sorted the data back to front or front to back, and runs rendered in stereo. Sorting front to back increased frame rates fairly significantly in this situation; however, this won't necessarily carry over to the more general case, since this was a fairly worst-case view (a sketch of the depth sort follows below). In the stereo trials, we saw frame rates roughly cut in half, as expected, since we rendered twice as many points.
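For completeness, here is a minimal sketch of the kind of front-to-back depth sort used in those runs; the Point struct and eye parameters are illustrative, not taken from our actual code.

```cpp
// Sort a point cloud by distance from the eye. Ascending order gives
// front-to-back rendering; reverse the comparison for back-to-front.
#include <algorithm>
#include <vector>

struct Point {
    float x, y, z;
    unsigned char r, g, b;
};

void sortFrontToBack(std::vector<Point>& points, float ex, float ey, float ez) {
    std::sort(points.begin(), points.end(),
              [ex, ey, ez](const Point& a, const Point& b) {
                  // Squared distances are enough for ordering; no sqrt needed.
                  float da = (a.x - ex) * (a.x - ex) + (a.y - ey) * (a.y - ey) +
                             (a.z - ez) * (a.z - ez);
                  float db = (b.x - ex) * (b.x - ex) + (b.y - ey) * (b.y - ey) +
                             (b.z - ez) * (b.z - ez);
                  return da < db;
              });
}
```

Front-to-back order tends to help because the depth test can reject occluded fragments before they are shaded (early-z), and that benefit is largest in a view like this one, where many points pile up on the same pixels.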
It would be interesting to see how these numbers match up against newer cards (this card is about 5 years old now).