Talk & Touch Interaction

Research Focus

Technological evolution has brought computers into our daily lives: our cars, homes, and smartphones surround us with information. Ever-decreasing display sizes and the fact that mobile computers do not receive our full attention require new concepts for interacting with these devices.

Automatic speech recognition and text-to-speech technology offer a promising alternative to substitute or augment the more conventional user interfaces we have grown accustomed to. Multi-touch technology is one of the most interesting graphical interaction concepts, gaining popularity with the newer generation of smartphones. The combination of multi-touch and voice recognition has the potential to dramatically speed up workflows.

However, using the voice modality raises challenges familiar from more traditional deployments in telephony and desktop environments. These can be met with profound knowledge of the design of voice user interfaces on the one hand, and of the combination of different modalities on the other. In addition, we investigate how to support the development of multimodal applications in pervasive environments such as meeting scenarios, driving a car, or controlling our homes. Another important aspect of our research is the social dimension of interaction in our home environments.

Current Projects


Dialog concepts for enabling a smart command & control voice interface in a home environment, combined with a semantic background information service that assists the user during discussions. The goal is a system that offers additional topic-related information without disturbing the user. The challenge is to explore design possibilities for a future smart assistant system at home.
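To illustrate the command & control style of voice interaction described above, the sketch below matches recognized utterances against a small fixed grammar. The commands and device names are illustrative examples only, not the project's actual grammar or API:

```python
# Minimal sketch of a command & control voice interface for the home:
# a recognized utterance is dispatched to the first matching grammar rule.
# All command patterns and device names here are hypothetical examples.
import re

COMMANDS = {
    r"turn (on|off) the (\w+)": lambda state, device: f"{device} -> {state}",
    r"set temperature to (\d+)": lambda temp: f"thermostat -> {temp} degrees",
}

def handle_utterance(text: str):
    """Dispatch a recognized utterance to the first matching command."""
    for pattern, action in COMMANDS.items():
        match = re.fullmatch(pattern, text.lower())
        if match:
            return action(*match.groups())
    # Out-of-grammar input: a real dialog system would ask for clarification.
    return None

handle_utterance("Turn on the lights")     # -> "lights -> on"
handle_utterance("Set temperature to 21")  # -> "thermostat -> 21 degrees"
```

A production system would use a proper recognition grammar (e.g. W3C SRGS) instead of regular expressions, but the dispatch structure is the same.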

User Interfaces for Brainstorming Meetings with Blind and Sighted Persons

The following research question is addressed in the proposed project: How can appropriate IT-based means improve the participation of the blind in workplace situations that require intense cooperation with the sighted? We plan scientific contributions in the areas of (a) novel interaction devices and (b) novel interaction techniques, combined with research in the area of (c) eAccessibility. Well-matching research labs with considerable experience had to be found for each of the areas (a)–(c); they turned out to be spread over all three countries participating in the 'lead agency' cooperation line: Switzerland (ETH Zurich), Germany (TU Darmstadt), and Austria (TU Linz).

Smart Vortex

The goal of Smart Vortex is to provide a technological infrastructure for real-time handling of massive product data streams.

Mundo Speech API

Development of a ubiquitous computing speech API that overcomes the limitations of embedded devices and supports multiple audio input and output devices, as well as multiple text-to-speech engines and speech recognizers with different capabilities in a given environment.
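The core idea of supporting multiple engines with different capabilities can be sketched as a registry that selects a backend by matching requested capabilities. This is a minimal illustration of the abstraction, with hypothetical class and capability names; it is not the actual Mundo Speech API:

```python
# Sketch of capability-based engine selection: applications request
# capabilities, and the registry picks a matching speech engine.
# All class names and capability labels are illustrative assumptions.
from abc import ABC, abstractmethod

class SpeechSynthesizer(ABC):
    """Common interface implemented by every text-to-speech backend."""
    capabilities: set

    @abstractmethod
    def speak(self, text: str) -> None: ...

class EmbeddedTTS(SpeechSynthesizer):
    """Small-footprint engine suitable for embedded devices."""
    capabilities = {"low-memory"}
    def speak(self, text):
        print(f"[embedded] {text}")

class ServerTTS(SpeechSynthesizer):
    """Full-featured engine running on a server in the environment."""
    capabilities = {"ssml", "multi-voice"}
    def speak(self, text):
        print(f"[server] {text}")

class SpeechRegistry:
    """Selects the first registered engine whose capabilities cover a request."""
    def __init__(self):
        self._engines = []
    def register(self, engine: SpeechSynthesizer):
        self._engines.append(engine)
    def find(self, required: set) -> SpeechSynthesizer:
        for engine in self._engines:
            if required <= engine.capabilities:
                return engine
        raise LookupError("no engine with the required capabilities")

registry = SpeechRegistry()
registry.register(EmbeddedTTS())
registry.register(ServerTTS())
registry.find({"ssml"}).speak("Hello")  # dispatched to the server engine
```

The same pattern applies to speech recognizers and audio devices: the application states what it needs, and the environment supplies a matching component.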

Past Projects


We advocate using live video streams not only over larger distances, but also in situ at smaller, closed events such as soccer matches or concerts. We present CoStream, a mobile live video sharing system, and describe its iterative design process.


We propose to leverage the hand as an interactive surface for TV remote control, as it enables device-less and eyes-free remote interaction without any third-party mediator device.


The research project Infostrom focuses on technical support for the cooperation of multiple involved organisations in disaster management and for a coordinated cross-organisational recovery in the case of a large power outage.

Designing Social Television

Research on social interactive television has mostly focused on the creation of communication features. In this work, we show that, depending on the video content, social television has a greater potential to provide feelings of togetherness if real-life relationships are taken into account.


The Structured Audio Information Retrieval System (STAIRS) project targets environments where workers need access to information but cannot use traditional hands-and-eyes devices such as a PDA.


Group Leader

Dr. Dirk Schnelle-Walka, Post-Doctoral Researcher

Research Staff

Stefan Radomski, Doctoral Researcher

Stephan Radeck-Arneth, Doctoral Researcher

Niloo Dezfuli, Doctoral Researcher

Sebastian Döweling, Doctoral Researcher