To satisfy the growing demands of their users, computer applications need more computational power. However, stagnating single-core performance limits what a program running on a single microprocessor can achieve, and that limit falls short of many needs. Examples include realistic simulations of physical or biological processes, real-time analysis of huge volumes of data, and artificial intelligence.
At the same time, even laptops and smartphones now ship with more than one processor core, and modern supercomputers offer up to several million. Unfortunately, developing programs that use many cores efficiently is still an art mastered only by experts. To harness this enormous reservoir of computational resources for individual applications in spite of these challenges, the Laboratory for Parallel Programming devises novel methods, tools, and algorithms that exploit massive parallelism on modern hardware architectures. Currently, we conduct research in the following areas:
- Discovery of parallelism in sequential programs
- Performance modeling of parallel programs
- Scalable parallel algorithms
- Scheduling of supercomputer resources
- Deep neural networks
Our research is carried out in a number of externally funded projects:
- Enabling Performance Engineering in Hesse and Rhineland-Palatinate (DFG)
- ExtraPeak (DFG)
- Human Brain Project (EU H2020)
- Software-Factory 4.0 (LOEWE)
- TaLPas (BMBF)
Within our research area of performance analysis of parallel programs, we have created Extra-P, a tool that generates application performance models from a small set of performance measurements. It is available for download under an open-source license.
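The idea behind such empirical performance modeling can be illustrated with a minimal sketch: measure a program's runtime at a few process counts, fit several candidate scaling terms by least squares, and keep the one with the smallest residual. This is a simplified illustration, not Extra-P's actual algorithm; the measurement data below is synthetic and the candidate terms are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical measurements: runtime (s) at increasing process counts p.
# The synthetic data follows t(p) = 2.0 + 0.5 * p * log2(p).
p = np.array([2, 4, 8, 16, 32, 64], dtype=float)
t = 2.0 + 0.5 * p * np.log2(p)

# Candidate model terms, loosely in the spirit of combinations of
# polynomial and logarithmic growth: f(p) = c0 + c1 * g(p).
candidates = {
    "p": p,
    "p*log2(p)": p * np.log2(p),
    "p^2": p ** 2,
    "log2(p)": np.log2(p),
}

best = None
for name, g in candidates.items():
    A = np.column_stack([np.ones_like(p), g])  # design matrix [1, g(p)]
    coeffs, residuals, *_ = np.linalg.lstsq(A, t, rcond=None)
    rss = residuals[0] if residuals.size else 0.0
    if best is None or rss < best[2]:
        best = (name, coeffs, rss)

name, (c0, c1), rss = best
print(f"best term: {name}; model: t(p) = {c0:.2f} + {c1:.2f} * {name}")
```

Because the synthetic data was generated from the `p*log2(p)` term, the fit recovers that shape with coefficients close to 2.0 and 0.5. A real modeling tool searches a much larger space of terms and guards against overfitting, but the core step of fitting candidate functions to a handful of measurements is the same.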
We develop our performance-modeling tool Extra-P within the framework of the Virtual Institute – High Productivity Supercomputing, a community organization for the development and promotion of programming tools designed for high-performance computing.