Text Analytics: Semi-supervised Learning for Semantic Text Processing

Description

Machine Learning methods that are able to learn from both labeled and unlabeled data are commonly subsumed under the term semi-supervised learning. Semi-supervised learning is an important approach for many practical applications, including text processing tasks such as relation extraction and sentiment analysis.

  • Typically, only a small amount of labeled data is available for training, while unlabeled data are abundant.
  • Often, labeled data are available only for a particular domain, while labeled data are lacking for the domain of interest.

The seminar reviews the most important methods for semi-supervised learning in the field of semantic text processing. Topics include bootstrapping methods, self-learning, distant supervision, semi-supervised methods in Deep Learning, and their use in information extraction tasks (e.g., relation extraction) and in sentiment analysis.
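
To give a flavor of one of these methods, the following is a minimal self-training (self-learning) sketch for sentiment classification, assuming scikit-learn and SciPy are available. The toy texts, labels, and confidence threshold are purely illustrative and are not part of the seminar materials.

    import numpy as np
    from scipy.sparse import vstack
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # A few labeled examples (1 = positive, 0 = negative) and an unlabeled pool
    # (hypothetical toy data for illustration only).
    labeled_texts = ["great movie, loved it", "terrible plot, boring",
                     "wonderful acting", "awful and dull"]
    labels = np.array([1, 0, 1, 0])
    unlabeled_texts = ["loved the wonderful acting", "boring and awful",
                       "great and wonderful", "dull, terrible plot"]

    vectorizer = TfidfVectorizer()
    X_lab = vectorizer.fit_transform(labeled_texts)
    X_unlab = vectorizer.transform(unlabeled_texts)

    clf = LogisticRegression()
    threshold = 0.6  # minimum confidence for accepting a pseudo-label (arbitrary choice)

    for _ in range(3):  # a few self-training rounds
        clf.fit(X_lab, labels)
        if X_unlab.shape[0] == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Move confidently pseudo-labeled examples into the labeled set.
        X_lab = vstack([X_lab, X_unlab[confident]])
        labels = np.concatenate([labels, proba[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]

    print(clf.predict(vectorizer.transform(["wonderful and great", "boring, awful movie"])))

The key design choice in such methods is which pseudo-labels to trust: here a simple probability threshold is used, but the seminar topics (bootstrapping, distant supervision) differ mainly in how this decision is made.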

Expectation

Each student is expected to

  • attend the seminar sessions and actively contribute to the discussions
  • prepare a presentation on a topic relevant to the seminar
  • deliver the presentation and answer questions from the audience
  • write a term paper on the topic

Organization

  • Seminar sessions are on Thursdays, 13:30-15:10 in S105/22
  • All other information will be provided during the course and added here.

The first seminar will be on October 16.

Literature

  • Xiaojin Zhu and Andrew B. Goldberg. Introduction to Semi-Supervised Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2009, 3:1. Morgan & Claypool.
  • Anders Søgaard. Semi-Supervised Learning and Domain Adaptation in Natural Language Processing. Synthesis Lectures on Human Language Technologies, 2013, 6:2. Morgan & Claypool.

Timetable

Introductory lectures on Natural Language Processing and Semi-supervised Learning will be held in the first three regular sessions of the seminar (October 16, October 23, and November 6), Thursdays 13:30-15:10 in S105/22.

On October 30, Prof. Jordan Boyd-Graber will give a guest lecture entitled "Thinking on your Feet: Reinforcement Learning for Incremental Language Tasks".

The program for the remainder of the seminar will be announced depending on the number of participants and the topics to be discussed.

Lecturers

  • Dr. Judith Eckle-Kohler (office hours will be announced in the first session; please register by e-mail)
  • Silvana Hartmann
  • Prof. Dr. Iryna Gurevych