Perceptual Graphics, Capture and Massively Parallel Computing
While the performance of graphics hardware continues to increase tremendously, there will always be a huge gap between the requirements of physically accurate global illumination and the amount of computation that can go into a single image in real-time rendering. The idea behind perceptual graphics is to focus on human perception in order to identify shortcuts that rendering can take to create plausible images instead of physically accurate ones. While some approaches to this are based on simple algorithms, others use machine learning to effectively approximate complex, unknown functions during rendering.
To render scenes that look as close to real as possible, we also need to be able to capture, represent and store the appearance of real-world objects. This includes both the geometry and the material properties.
Finally, both machine learning and more traditional approaches to reconstructing objects or materials require very large amounts of computational resources. To produce results in an acceptable timeframe, massively parallel systems, such as multi-GPU servers, need to be programmed and used.
In the area of rendering, we look into approaches that increase the perceptual accuracy of real-time image formation, including motion blur, defocus blur and transparent objects. In addition, we study the perception of rendered images in general, both by observing the covert part of the human visual system using EEGs and by looking into more specific issues, such as the perception of noise. One application that combines the perception of noise with contrast masking is perceptual-quality-driven adaptive sampling, which increases the perceived convergence of physically accurate global illumination.
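The core idea of perceptually driven adaptive sampling can be illustrated with a small sketch: pixels whose residual Monte Carlo noise is more *visible* receive a larger share of the sample budget. The weighting below (variance attenuated by a masking term) is a hypothetical stand-in for a real perceptual model, not the model used in our research.

```python
def allocate_samples(variance, masking, budget):
    """Distribute a fixed per-frame sample budget across pixels so that
    pixels with more visible noise receive more samples.

    variance: per-pixel variance estimates of the Monte Carlo estimator
    masking:  per-pixel contrast-masking strength (higher = noise is
              less visible, e.g. in high-frequency texture regions)

    The visibility model below is purely illustrative.
    """
    # Perceived error: variance attenuated by masking (hypothetical model).
    visibility = [v / (1.0 + m) for v, m in zip(variance, masking)]
    total = sum(visibility) or 1.0
    # Proportional allocation, with at least one sample per pixel.
    return [max(1, round(budget * w / total)) for w in visibility]

samples = allocate_samples(
    variance=[0.5, 0.1, 0.9, 0.2],
    masking=[0.0, 2.0, 4.0, 0.5],
    budget=100,
)
```

Here the high-variance, unmasked first pixel receives the most samples, while the strongly masked second pixel receives the fewest, even though its variance is not the lowest in absolute terms.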
For capturing real-world objects, we are interested in reconstructing thin structures and transparent objects such as glasses. In addition, we would like to push the boundary of how accurately objects can be reconstructed, including their physical scale, so that capturing approaches can be used for industrial quality control, where absolute measurements are required.
Our research in these graphics-based areas often requires complex algorithms that process huge amounts of data. While approaches for most of these algorithms exist, some still lack efficient massively parallel implementations, while others have not been parallelized at all. A recent example from this area is a massively parallel approach for solving large, dense linear assignment problems.
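For context, the linear assignment problem asks for a one-to-one matching between rows and columns of a cost matrix that minimizes the total cost. The brute-force solver below only defines the problem on tiny inputs; it is an illustration, not the massively parallel method itself, which must handle large dense matrices where O(n!) enumeration is hopeless.

```python
import itertools

def solve_lap_bruteforce(cost):
    """Find the row-to-column assignment (a permutation) with minimum
    total cost by exhaustive enumeration. O(n!) - illustrative only;
    practical solvers use e.g. auction or shortest-path algorithms."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_perm, best_cost

cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
perm, total = solve_lap_bruteforce(cost)  # row i is assigned column perm[i]
```

Auction-style algorithms for this problem are a natural fit for GPUs, since each unassigned row can bid for its best column independently.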
In addition to offering elective lectures (integrated events) on capturing real-world objects and on massively parallel computing, we also offer labs and thesis topics in all of our research areas. Please contact Dr. Stefan Guthe for further details on currently available topics.