Deep Neural Networks

The inherent parallelism in the training and deployment of neural networks makes them a prime candidate for parallel computing. Our research in this area targets both optimizing neural networks themselves and using them as a tool to understand and improve the performance of arbitrary parallel programs. Recently, we developed a tuning method for deep neural networks that optimizes non-functional requirements such as inference speed, training cost, energy consumption, and network size under given accuracy constraints.
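To make the idea of tuning under an accuracy constraint concrete, here is a minimal sketch, not the method from the publication below: a plain random search over network widths that keeps the smallest model still meeting a fixed accuracy bound. The dataset, search space, and the 0.95 accuracy threshold are illustrative assumptions.

    import random

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ACCURACY_BOUND = 0.95  # hard accuracy constraint (assumed value)

    # Candidate hidden-layer configurations: one or two layers of equal width.
    search_space = [(w,) for w in (8, 16, 32, 64, 128)] + \
                   [(w, w) for w in (8, 16, 32, 64)]

    best = None  # (parameter_count, widths, accuracy)
    for widths in random.sample(search_space, k=6):
        model = MLPClassifier(hidden_layer_sizes=widths, max_iter=500,
                              random_state=0).fit(X_train, y_train)
        accuracy = model.score(X_test, y_test)
        # Network size = total number of trainable weights and biases.
        size = (sum(c.size for c in model.coefs_) +
                sum(b.size for b in model.intercepts_))
        # Minimize the non-functional objective (size) subject to the
        # accuracy constraint: only feasible candidates are considered.
        if accuracy >= ACCURACY_BOUND and (best is None or size < best[0]):
            best = (size, widths, accuracy)

    print("smallest feasible network:", best)

The same loop structure carries over to other non-functional objectives (inference speed, energy) by swapping the quantity being minimized while keeping the accuracy check as the feasibility test.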

Selected Publications

  • Rahim Mammadli, Felix Wolf, Ali Jannesari: The Art of Getting Deep Neural Networks in Shape. ACM Transactions on Architecture and Code Optimization (TACO), 15(4):62:1-62:21, January 2019.