Guiding Theme A3: Opinion and Sentiment - extrapropositional aspects of discourse

Guiding Theme A3 analyzes extra-propositional aspects of meaning: distinguishing facts from non-facts and classifying sentiment associated with an entity (A1) or an event (A2). These aspects of meaning help identify entities or facts in a given text that the writer considers important, in that he or she conveys an opinion or sentiment towards them. The analysis of modality and sentiment will serve as a guide for content extraction in summarization and feeds directly into aspect-based summarization, which aims to highlight different perspectives and opinions on specific contents, or the pros and cons of a situation. Starting from the fine-grained analysis of overtly expressed sentiment in a discourse, we aim to proceed towards a deeper analysis of sentiments, to learn about the reasons or intentions underlying an expressed sentiment, and thus towards reasoning about and explaining sentiments. We therefore aim to extend aspect-based sentiment analysis with background knowledge and reasoning capabilities. As areas of application we envisage politics and journalism, scientific texts, and argumentation.

Research results of the first Ph.D. cohort

The main focus of the first phase was to develop machine learning models for tasks that support the propagation of sentiment in discourse, beyond the sentence level. The tasks we address are: (i) fine-grained opinion analysis (i.e., detecting who expressed what kind of attitude towards whom or what) and (ii) abstract anaphora resolution (i.e., finding the non-nominal antecedents of pronouns and noun phrases that refer to abstract objects such as facts, events, actions or situations in the preceding discourse). We propose a neural model for sentence-level fine-grained opinion analysis and address data scarcity by embedding this model in a multi-task learning framework, obtaining clear performance improvements with semantic role labeling as the auxiliary task (Marasović and Frank, 2018). For abstract anaphora resolution, we assume that the correct antecedent for a given abstract anaphor can be identified by learning attributes of the relation that holds between the sentence containing the anaphor and its antecedent. We propose a siamese-LSTM mention-ranking model to learn what characterizes these relations (Marasović et al., 2017). Although existing resources for abstract anaphora resolution are scarce, we can train our models on large numbers of antecedent–anaphor sentence pairs. Such pairs can be extracted automatically from parsed corpora by searching for constructions with embedded sentences, applying a simple transformation that replaces the embedded sentence with an abstract anaphor, and using the cut-off embedded sentence as the antecedent (Marasović et al., 2017). Our model outperforms prior work on shell noun resolution (Kolhatkar et al., 2013), and we also report first benchmark results on an abstract anaphora subset of the ARRAU corpus (Uryupina et al., 2016), which presents a greater challenge due to its mixture of nominal and pronominal anaphors and a greater range of confounders.
Since both our models and the training data augmentation are flexible and language-independent, we can port them to German and to other domains with minimal effort.
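The pair-extraction transformation described above can be sketched as follows. This is a deliberately simplified illustration that operates on raw strings with a hand-picked list of embedding verbs; the actual pipeline (Marasović et al., 2017) works on parsed corpora, and all names here are hypothetical:

```python
import re

# Hypothetical sketch of the training-pair extraction: find "V that S"
# constructions, cut off the embedded sentence S to serve as the gold
# antecedent, and replace it with an abstract anaphor ("this") to create
# an artificial anaphoric sentence. A real implementation would match
# embedded clauses in parse trees rather than use regular expressions.
EMBEDDING_VERBS = r"(said|believes|knows|doubts|suggested)"

def extract_pair(sentence: str):
    """Return (anaphoric_sentence, antecedent) or None if no match."""
    m = re.match(rf"(.+?\b{EMBEDDING_VERBS})\s+that\s+(.+?)\.?$", sentence)
    if m is None:
        return None
    matrix, _, embedded = m.group(1), m.group(2), m.group(3)
    # The matrix clause plus an abstract anaphor forms the anaphoric
    # sentence; the cut-off embedded clause is its antecedent.
    anaphoric_sentence = f"{matrix} this."
    return anaphoric_sentence, embedded
```

For example, "The minister said that taxes will rise." yields the anaphoric sentence "The minister said this." paired with the antecedent "taxes will rise", giving one automatically labeled training instance.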

In joint work (Zopf et al. 2018) we show that sentiment annotations can serve as useful features for modeling the notion of ‘information importance’ in text summarization (cf. Zopf et al. 2016).

As a further contribution, which provides useful information for document-level sentiment inference and for explaining sentiments, we developed the first multilingual neural network approach to modal sense disambiguation (Marasović and Frank 2016, Marasović et al. 2016), which addresses the annotation bottleneck by applying cross-lingual sense projection.

Ongoing project of the second Ph.D. cohort

In the recently started Ph.D. project we extend our research towards a deeper understanding and explanation of sentiments and attitudes (cf. Li and Hovy 2017). Building on the results of the first Ph.D. thesis, the focus of our work is now to explain the motives underlying the sentiments and opinions expressed in a discourse. Being able to provide such explanations will allow us to support and improve diverse downstream tasks, such as recommendation systems, event text summarization, conversational dialogue processing, and argumentation analysis.

As a first step, we are developing a framework that identifies psychological needs that can explain the sentiment towards a given target. We will then proceed to address deeper facets of sentiments by including background knowledge from both textual and structured knowledge sources. Ultimately, we aim to develop a neural architecture that effectively uses background knowledge to learn possible explanations for sentiments and to generate explanations for the explicit or implicit sentiments that we detect in a text.
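To illustrate the kind of knowledge-based explanation we target, the following sketch ranks multi-hop paths from a sentiment target through a knowledge graph to candidate need categories (cf. Paul and Frank 2019). The toy graph, the need inventory, and the product-of-weights scoring function are all illustrative assumptions, not the published framework:

```python
# Hypothetical sketch: enumerate multi-hop paths from a sentiment target
# to psychological-need nodes in a small knowledge graph and rank them.
# Toy knowledge graph: (head, relation, tail, confidence weight).
EDGES = [
    ("promotion", "Causes", "higher salary", 0.9),
    ("higher salary", "RelatedTo", "financial security", 0.8),
    ("financial security", "MotivatedBy", "safety", 0.9),
    ("promotion", "RelatedTo", "recognition", 0.7),
    ("recognition", "MotivatedBy", "esteem", 0.9),
]

NEEDS = {"safety", "esteem", "belonging"}  # illustrative need inventory

def paths_to_needs(start, max_hops=3):
    """Enumerate cycle-free paths from `start` to any need node."""
    frontier = [[("", "", start, 1.0)]]  # dummy first edge holds the start node
    results = []
    for _ in range(max_hops):
        next_frontier = []
        for path in frontier:
            node = path[-1][2]
            visited = {p[2] for p in path}
            for edge in EDGES:
                if edge[0] == node and edge[2] not in visited:
                    new_path = path + [edge]
                    if edge[2] in NEEDS:
                        results.append(new_path)
                    else:
                        next_frontier.append(new_path)
        frontier = next_frontier
    return results

def score(path):
    """Product of edge weights: favors short, high-confidence paths."""
    s = 1.0
    for _, _, _, w in path[1:]:
        s *= w
    return s

def best_need(start):
    """Return the highest-scoring need explanation for a target."""
    best = max(paths_to_needs(start), key=score)
    return best[-1][2], score(best)
```

For the target "promotion", the path via "higher salary" and "financial security" to the need "safety" outranks the shorter path to "esteem" under this toy scoring; a learned model would replace the hand-set weights with scores derived from context.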

Among others, the challenges our work will need to address are: (i) finding appropriate justifications for sentiments in knowledge sources and texts, so as to be able to provide such explanations; (ii) exploiting and learning inference rules from graph-structured knowledge bases, to improve the interpretability of the neural model; and (iii) learning to generate textual explanations, both to further enhance interpretability and to evaluate our models.


  • PI: Prof. Dr. Anette Frank, Prof. Dr. Michael Strube
  • First Cohort PhD student: Ana Marasović
  • Second Cohort PhD student: Debjit Paul


  • Kolhatkar, V., Zinsmeister, H., and Hirst, G. Interpreting Anaphoric Shell Nouns using Antecedents of Cataphoric Shell Nouns as Training Data. EMNLP 2013.
  • Uryupina, O., Artstein, R., Bristot, A., Cavicchio, F., Rodríguez, K. J., and Poesio, M. ARRAU: Linguistically-Motivated Annotation of Anaphoric Descriptions. LREC 2016.
  • Li, J., and Hovy, E. Reflections on Sentiment/Opinion Analysis. In: A Practical Guide to Sentiment Analysis, Springer, Cham, 41–59, 2017.
  • Zopf, M., Loza Mencía, E., and Fürnkranz, J. Beyond Centrality and Structural Features: Learning Information Importance for Text Summarization. CoNLL 2016.


  • Paul, D., and Frank, A. Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs. NAACL-HLT 2019.
  • Paul, D., and Hedderich, M. A. Handling Noisy Labels for Robustly Learning from Self-Training Data for Low-Resource Sequence Labeling. NAACL-HLT 2019, Student Research Workshop.
  • Zopf, M., Botschen, T., Falke, T., Heinzerling, B., Marasović, A., Mihaylov, T., P. V. S., A., Loza Mencía, E., Fürnkranz, J., and Frank, A. What's Important in a Text? An Extensive Evaluation of Linguistic Annotations for Summarization. 2018.
  • Marasović, A., and Frank, A. SRL4ORL: Improving Opinion Role Labelling using Multi-task Learning with Semantic Role Labeling. NAACL-HLT 2018.
  • Marasović, A., Born, L., Opitz, J., and Frank, A. A Mention-Ranking Model for Abstract Anaphora Resolution. EMNLP 2017.
  • Marasović, A., and Frank, A. Multilingual Modal Sense Classification using a Convolutional Neural Network. Proceedings of the 1st Workshop on Representation Learning for NLP, 2016.
  • Marasović, A., Zhou, M., Palmer, A., and Frank, A. Modal Sense Classification at Large: Paraphrase-Driven Sense Projection, Semantically Enriched Classification Models and Cross-Genre Evaluations. Linguistic Issues in Language Technology 14(3), Special Issue on "Modality in Natural Language Understanding", 2016.
