Efficient Domain Adaptation for Abstractive Summarization

Abstractive text summarization aims to distill the essential information of a text into a shorter version. Recent abstractive summarization methods are predominantly deep learning models, which require large amounts of data and computational resources; such resources are not always available, and gathering data for new domains is expensive and time-consuming. The goal of this project is to explore and evaluate domain adaptation approaches for abstractive summarization [1]. We also aim to find optimal subnetworks in pre-trained language models for text summarization [2].
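One common way to search for subnetworks in a pre-trained model is magnitude-based pruning (as in lottery-ticket-style approaches): weights with the smallest absolute values are masked out, and the remaining subnetwork is kept. The sketch below is only an illustration of that general idea, not the project's actual method; the function name and the NumPy-based setup are assumptions for the example.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask that prunes the given fraction of smallest-magnitude weights.

    This is a generic magnitude-pruning sketch, not the project's method.
    """
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))  # number of weights to prune
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Toy usage: prune half the weights of a random 4x4 matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = magnitude_mask(w, sparsity=0.5)
pruned = w * mask  # the surviving subnetwork's weights
```

In lottery-ticket-style experiments, such a mask is typically derived after (or during) training and the surviving weights are then rewound or fine-tuned; the same masking principle extends to transformer layers.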