The group’s research foci are the analysis, classification and visualization of spatial measurement data. We use machine learning methods such as deep learning for the fully automated interpretation of 2D and 3D measurement data. This involves training artificial neural networks (ANNs) to recognize and pinpoint objects – for instance from urban infrastructure – in comprehensive mobile measurement system data sets.
Our interactive applications for navigating measurement data visually support complex analysis and decision-making processes. Depending on the use case, we can also develop visualization variants for appropriate platforms that display the measurement results in real time, allowing recording processes to be readjusted interactively. For real-time visualization on mobile devices with limited processing power, we rely on dedicated visualization components. For AI-based object recognition, we create synthetic training data. In doing so, we also lay the foundation for the iterative optimization of measurement systems with a view to future machine-based data interpretation.
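A common building block behind such visualization components for devices with limited processing power is level-of-detail reduction, for example voxel-grid downsampling of the point cloud. The sketch below is an illustrative assumption about how such a reduction step might look, not the group's actual component:

```python
import random

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by keeping one representative point per voxel.

    Coarser voxel sizes yield fewer points, so weaker devices can still
    draw the cloud at interactive frame rates.
    """
    voxels = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        # Keep the first point seen in each voxel (a centroid is another option).
        voxels.setdefault(key, (x, y, z))
    return list(voxels.values())

# A dense synthetic cloud: 10,000 random points in a 10 m cube.
random.seed(0)
cloud = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 10))
         for _ in range(10_000)]

coarse = voxel_downsample(cloud, voxel_size=1.0)  # at most 10^3 occupied voxels
print(len(cloud), len(coarse))
```

In practice, several such reductions at different voxel sizes are precomputed and the renderer picks a level per view, trading detail for frame rate.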
Visualization
- Display of massive point clouds
- Real-time rendering at more than 20 frames per second
- Various visualization techniques for intuitive presentation of complex information, e.g. lighting calculation, color-coding of edges and false-color representation
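To make the false-color idea concrete, here is a minimal sketch that maps point heights onto a blue-to-red ramp; the ramp and feature choice are illustrative assumptions, not the group's actual color scheme:

```python
def height_to_false_color(z, z_min, z_max):
    """Map a height value to an RGB triple on a blue-to-red ramp.

    False-color representation makes a scalar attribute of a point cloud
    (here: elevation) visible at a glance.
    """
    t = 0.0 if z_max == z_min else (z - z_min) / (z_max - z_min)
    t = min(1.0, max(0.0, t))
    # Linear blend: low points blue (0, 0, 255), high points red (255, 0, 0).
    return (int(255 * t), 0, int(255 * (1 - t)))

# Three sample points (x, y, z); only z drives the color here.
points = [(0.0, 0.0, 1.2), (1.0, 0.5, 4.8), (2.0, 1.0, 9.9)]
zs = [p[2] for p in points]
colors = [height_to_false_color(z, min(zs), max(zs)) for z in zs]
print(colors)  # lowest point is pure blue, highest is pure red
```

The same mapping works for any per-point scalar, e.g. intensity or deviation from a reference surface.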
Synthetic training data
- Creation of 3D scenes including material properties, lighting conditions, weather phenomena and dynamic properties
- Algorithmic generation of 3D models from parameterizable components
- Creation of simulated measurement data: photo-realistic images and 3D point clouds
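The two steps above, generating a model from a parameterizable component and turning it into simulated measurement data, can be sketched as follows. The cylinder stands in for a simple component such as a lamp post; shape, noise model and all parameter values are assumptions for illustration:

```python
import math
import random

def sample_cylinder(radius, height, n_points, noise_sigma=0.0, rng=None):
    """Sample points on a cylinder shell, a stand-in for a parameterizable
    3D component such as a lamp post or bollard.

    With noise_sigma > 0, Gaussian noise turns the ideal surface into
    simulated measurement data resembling a scanned 3D point cloud.
    """
    rng = rng or random.Random(42)
    points = []
    for _ in range(n_points):
        phi = rng.uniform(0.0, 2.0 * math.pi)  # angle around the axis
        z = rng.uniform(0.0, height)           # position along the axis
        x = radius * math.cos(phi) + rng.gauss(0.0, noise_sigma)
        y = radius * math.sin(phi) + rng.gauss(0.0, noise_sigma)
        points.append((x, y, z + rng.gauss(0.0, noise_sigma)))
    return points

# Ideal model vs. simulated scan of the same parameterized component.
model = sample_cylinder(radius=0.1, height=4.0, n_points=500)
scan = sample_cylinder(radius=0.1, height=4.0, n_points=500, noise_sigma=0.005)
print(len(model), len(scan))
```

Varying the parameters (radius, height, noise level, and in a fuller pipeline also materials, lighting and weather) yields arbitrarily many labeled training examples from one component definition.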
Automated data interpretation
- Fully automated interpretation of 2D and 3D measurement data, e.g. by means of deep learning
- Implementation of cloud-based solutions for data processing
- Compilation of comprehensive training datasets for the automated training of algorithms
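As a toy illustration of learned interpretation, the sketch below trains a single sigmoid neuron to separate pole-like from ground-like objects using two hand-crafted features. This is a deliberately minimal stand-in for the deep networks used on real 2D and 3D measurement data; the features, labels and training setup are all assumptions:

```python
import math
import random

def sigmoid(x):
    if x < -60.0:  # guard against math.exp overflow for large negatives
        return 0.0
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, labels, lr=0.5, epochs=200):
    """Fit a single sigmoid neuron by stochastic gradient descent on log loss."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (f1, f2), y in zip(samples, labels):
            p = sigmoid(w[0] * f1 + w[1] * f2 + b)
            err = p - y  # gradient of the log loss w.r.t. the pre-activation
            w[0] -= lr * err * f1
            w[1] -= lr * err * f2
            b -= lr * err
    return w, b

# Toy features per detected object: (vertical extent, horizontal footprint).
# Label 1 = pole-like (tall, thin), label 0 = ground patch (flat, wide).
rng = random.Random(1)
samples, labels = [], []
for _ in range(100):
    if rng.random() < 0.5:
        samples.append((rng.uniform(3.0, 6.0), rng.uniform(0.05, 0.3)))
        labels.append(1)
    else:
        samples.append((rng.uniform(0.0, 0.3), rng.uniform(2.0, 8.0)))
        labels.append(0)

w, b = train_neuron(samples, labels)
preds = [int(sigmoid(w[0] * f1 + w[1] * f2 + b) > 0.5) for f1, f2 in samples]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)
```

Real deployments replace the hand-crafted features and single neuron with deep networks that consume raw images or point clouds, and run the training on cloud infrastructure over comprehensive datasets.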