Parallelism in a computer program is the ability to run it faster by using more than one processor at the same time. Parallel programming is the act of writing programs that can harness such parallelism. This becomes particularly relevant when no other optimization options remain, such as switching to a faster processor or a better algorithm. Given that technical constraints, in particular power consumption, limit the speed of individual processors, parallelism is permeating the world of computers.
However, the principle of parallelism predates the age of computers by far. Our ancestors already applied parallelism whenever a problem such as hunting big animals, harvesting a field, or constructing a house exceeded the capacity of an individual. Today, teamwork is the rule rather than the exception in almost every organization. The work in a team should be balanced, and team members need to synchronize their efforts to maintain the correct order among subtasks and avoid confusion.
Parallel programmers face similar challenges when orchestrating the work of multiple processors. Unfortunately, no compiler exists that can translate an arbitrary sequential program into a correct and efficient parallel program, leaving most of the work to human experts. To make their task easier and save them time, we create methods, tools, and algorithms that support the development and deployment of parallel software systems in various stages of their life cycle. In addition, we teach our students the skills to successfully exploit parallelism in their programs.