Discovery of Parallelism in Sequential Programs
The ability of compilers to automatically translate sequential programs into efficient parallel code is quite limited. Although auto-parallelization has been applied successfully in restricted cases, such as loops that satisfy certain properties, no compiler yet exists that can effectively parallelize an arbitrarily structured program. Because a compiler does not know the precise values of pointers and array indices computed at runtime, it must assume parallelism-preventing data dependences even in places where they never occur in practice. As a result, parallelization becomes overly conservative. With our parallelism discovery tool DiscoPoP, we aim to circumvent this problem. We abandon the idea of fully automatic parallelization and instead point the programmer to likely parallelization opportunities, which we identify via dynamic dependence analysis. In this way, we consider only data dependences that actually occur. From these dynamic dependences we derive possible parallel design patterns, which we propose to the programmer.
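The core idea can be illustrated with a minimal sketch of dynamic dependence detection. This is not DiscoPoP's implementation; it is a toy model, with illustrative names (`profile_loop`, `idx`), in which each loop iteration logs the memory locations it reads and writes, and cross-iteration read-after-write dependences are derived from that trace.

```python
# Toy dynamic dependence profiler (illustrative, not DiscoPoP code).
# Each iteration reports the abstract memory locations it reads and writes;
# we track the last writer of every location and record cross-iteration
# read-after-write (RAW) dependences.

def profile_loop(n, reads, writes):
    """Return cross-iteration RAW dependences as (src_iter, dst_iter, loc)."""
    last_writer = {}  # memory location -> iteration that last wrote it
    deps = []
    for i in range(n):
        for loc in reads(i):
            if loc in last_writer and last_writer[loc] != i:
                deps.append((last_writer[loc], i, loc))
        for loc in writes(i):
            last_writer[loc] = i
    return deps

# A loop with indirect indexing, e.g. a[idx[i]] += b[i].  Statically, two
# iterations might map to the same cell, so a compiler must assume a
# dependence.  At runtime, idx happens to be a permutation, so no
# cross-iteration dependence actually occurs.
idx = [3, 0, 2, 1]
deps = profile_loop(4,
                    reads=lambda i: [("a", idx[i])],
                    writes=lambda i: [("a", idx[i])])
print(deps)  # → [] : iterations are independent, loop is parallelizable
```

An empty dependence list is evidence (for this input) that the loop's iterations are independent, which is exactly the kind of finding that would be reported to the programmer as a parallelization opportunity rather than applied automatically.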
- Zhen Li, Rohit Atre, Zia Ul Huda, Ali Jannesari, Felix Wolf: Unveiling Parallelization Opportunities in Sequential Programs. Journal of Systems and Software, 117:282–295, July 2016.
- Zia Ul Huda, Rohit Atre, Ali Jannesari, Felix Wolf: Automatic Parallel Pattern Detection in the Algorithm Structure Design Space. In Proc. of the 30th IEEE International Parallel and Distributed Processing Symposium (IPDPS), Chicago, USA, pages 43–52, IEEE Computer Society, May 2016.
- Zhen Li, Ali Jannesari, Felix Wolf: An Efficient Data-Dependence Profiler for Sequential and Parallel Programs. In Proc. of the 29th IEEE International Parallel and Distributed Processing Symposium (IPDPS), Hyderabad, India, pages 484–493, IEEE Computer Society, May 2015.