Interactive data visualization

GPU programming for interactive 3D applications

3D data collection provides significant added value, for example in the inspection and monitoring of infrastructure. The human-machine interface is generally still a screen combined with interactive input devices such as a mouse or keyboard. Technologies from the field of augmented and virtual reality (AR/VR) now provide the technological basis for visualizing measurement data in 3D, making it possible to capture the relevant measurement parameters intuitively. Keeping this simple for the user requires wide-ranging expertise in application-specific data processing and algorithmic visualization techniques.

To ensure user comfort, the display must be smooth and free of jerky motion. This requires image sequences of at least 20 Hertz. 3D data sets from measurement applications usually contain several hundred thousand 3D points. For every 2D image to be generated, these points first have to be transformed into image space. Then, a complex calculation is performed for every pixel of the resulting image to determine its color value, shading and texture. This requires huge amounts of processing power. Driven by the gaming industry, extremely high-performance GPUs (graphics processing units) with enormous processing power are now available. This generation of GPUs enables massively parallel image processing. Efficient use of the GPU forms the basis for our work in the realm of 3D data visualization. To this end, we rely on platform-independent source code.
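The transformation into image space mentioned above is, at its core, a perspective projection. The following sketch illustrates it in Python with numpy; the function name, field-of-view and image-size parameters are hypothetical defaults chosen for illustration, not part of the software described here (in practice this step runs on the GPU in a vertex shader):

```python
import numpy as np

def project_points(points, fov_y_deg=60.0, aspect=16/9,
                   width=1920, height=1080):
    """Project Nx3 camera-space points to pixel coordinates using a
    standard perspective projection (illustrative parameters)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Normalized device coordinates; points in front of the camera have z < 0.
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    # Map NDC range [-1, 1] to pixel coordinates (origin top-left).
    px = (ndc_x + 1.0) * 0.5 * width
    py = (1.0 - (ndc_y + 1.0) * 0.5) * height
    return np.stack([px, py], axis=1)

# A point straight ahead of the camera lands in the image center.
pts = np.array([[0.0, 0.0, -5.0]])
print(project_points(pts))  # → [[960. 540.]]
```

At 20 Hz, this projection (plus per-pixel shading) must complete for every point of the cloud within 50 ms per frame, which is why it is offloaded to the GPU.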

Preparing data for real-time visualization

Initially, the raw 3D data exist as point clouds – large, unstructured collections of points without any additional information, for instance on their relationship to neighboring points or their association with surfaces. While every pixel in a 2D image holds information that appears structured to the human eye, 3D point clouds contain many holes, which makes them difficult for the human eye to interpret. For this reason, our visualization software starts by meshing the data to construct surfaces that fill the holes. It searches for relationships between neighboring points in the point cloud and spans surfaces between points that belong together. These surfaces are usually built from quadrilaterals, each of which is in turn split into two triangles. The edge length of the triangles determines the mesh size of the resulting grid. Each triangle is then assigned a color representative of its surface section.

The first reduction in data volume occurs during meshing, since the points on a surface no longer exist individually but are merged into triangles together with information on their size and position in space. This is why meshed 3D geometries are so advantageous for real-time 3D display. In addition to the relatively low number of points (limited to several hundred thousand), these geometries also contain information on how the points are connected to each other and to surfaces.
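One way to see the storage benefit of shared connectivity is to compare a mesh that duplicates every triangle corner with an indexed mesh that stores each point once plus an index buffer. The figures below are a back-of-the-envelope illustration for an assumed 1000 × 1000 organized grid, not measurements from the software described here:

```python
# Storage comparison for a hypothetical 1000x1000 organized point grid.
rows = cols = 1000
n_points = rows * cols
n_tris = 2 * (rows - 1) * (cols - 1)   # two triangles per grid cell

bytes_per_vertex = 3 * 4   # x, y, z as 32-bit floats
bytes_per_index = 4        # 32-bit triangle index

# Unindexed: every triangle stores its own three corner points.
unindexed = n_tris * 3 * bytes_per_vertex
# Indexed: each point stored once, plus three indices per triangle.
indexed = n_points * bytes_per_vertex + n_tris * 3 * bytes_per_index

print(unindexed, indexed)  # → 71856072 35952024
```

Sharing vertices roughly halves the data volume in this example; further reduction comes from coarsening the mesh (larger triangle edge lengths) where the surface is smooth.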


Different (»on-line«) renderings of a point cloud visualized in real time. Real-time visualization requires knowledge of the many different data formats, the widely varying methods for data preparation and the advantages and disadvantages of these methods as well as the possibilities for combining them.

Digital 3D model
© Fraunhofer IPM
A digital 3D model of the scene is the starting point for creating a photo-realistic rendering (center) and a segmented image (right).
Photo-realistic rendering
© Fraunhofer IPM
A photo-realistic image rendered from the 3D model. Such artificially created images can be used for demonstration purposes and viewed from different perspectives.
Automatically segmented and classified image
© Fraunhofer IPM
Based on the 3D model, the recorded scene is segmented and classified by means of appropriate software. This way, objects of interest can be identified intuitively by the viewer.