News Archive



Time: 09.09.2019, 14:30

Location: Room 073, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Prof. Dr. Tobias Schreck,

Institute of Computer Graphics and Knowledge Visualization at TU Graz, Austria

Title: "Visual Analytics Approaches for Data Exploration: Visual Cluster Analysis, Visual Pattern Search, and Immersive Analytics"

Abstract: Visual Analytics approaches support users in interactive data exploration and pattern discovery, relying on data visualization integrated with steerable data analysis algorithms. After a brief introduction to the basic ideas of Visual Analytics, we discuss examples of Visual Analytics research from our current work. First, we discuss interactive visual clustering for the exploration of time series and trajectory data. Second, we discuss approaches for the retrieval, comparison, and modeling of visual patterns in high-dimensional data. Third, we discuss ongoing work on immersive analytics of movement data captured in VR-based training applications. We close by highlighting research opportunities, including user guidance and eye tracking as an analytic interaction modality.

CV: Tobias Schreck is a Professor at the Institute of Computer Graphics and Knowledge Visualization at TU Graz, Austria. Between 2011 and 2015, he was an Assistant Professor with the Data Analysis and Visualization Group at the University of Konstanz, Germany. Between 2007 and 2011, he was a postdoctoral researcher and research group leader with the Interactive Graphics Systems Group at TU Darmstadt, Germany. Tobias Schreck obtained a PhD in Computer Science from the University of Konstanz in 2006. His research interests are in Visual Analytics and applied 3D object retrieval. He served as papers co-chair for the IEEE Conference on Visual Analytics Science and Technology (VAST) in 2017 and 2018. For more information, please see


Time: 05.08.2019, 10:30-11:30

Location: Room S101|A2, Universitätszentrum, karo 5, Karolinenplatz 5

Speaker: Dr. Ilkay Oksuz

Biomedical Engineering Department, King's College London

Title: "Automatic Quality Assessment of Cardiac MRI using Deep Learning Techniques"

Abstract: Cardiovascular disease (CVD) is the major cause of mortality worldwide. Recently, cardiovascular magnetic resonance (CMR) techniques have gained ground in the diagnosis of cardiovascular disease, and good quality of such MR images is a prerequisite for the success of subsequent image analysis pipelines. Quality assessment of medical images is therefore an essential activity, and for large population studies such as the UK Biobank (UKBB), manual identification of artefacts such as those caused by unanticipated motion is tedious and time-consuming. In this talk, recent work on the detection of incorrect cardiac planning and cardiac motion artefacts using deep learning techniques will be described. The details of the deep learning architectures and machine learning methodologies will be given, with a particular focus on synthetic k-space corruption and curriculum learning techniques. In the last part of the talk, mechanisms to correct image artefacts will be discussed alongside their influence on achieving high segmentation accuracy.
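
The synthetic k-space corruption mentioned above can be illustrated with a minimal sketch (illustrative only; the corruption scheme used in the actual work is not reproduced here): a fraction of k-space lines is replaced with lines computed from a shifted copy of the image, mimicking patient motion between readouts.

```python
import numpy as np

def corrupt_kspace(image, fraction=0.1, seed=0):
    """Simulate a motion-like artefact by replacing a fraction of
    k-space lines with lines from a shifted version of the image
    (a crude stand-in for motion between readouts)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)
    # "Motion": the same anatomy, shifted by a few pixels.
    moved = np.roll(image, shift=3, axis=0)
    kspace_moved = np.fft.fft2(moved)
    # Overwrite randomly chosen phase-encode lines with the moved data.
    n_lines = image.shape[0]
    bad = rng.choice(n_lines, size=max(1, int(fraction * n_lines)), replace=False)
    kspace[bad, :] = kspace_moved[bad, :]
    # Back to image space; the magnitude image now shows ghosting.
    return np.abs(np.fft.ifft2(kspace))

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0          # simple square "anatomy"
corrupted = corrupt_kspace(phantom, fraction=0.2)
```

Reconstructing the corrupted k-space yields ghosting-like artefacts of the kind a detection network can be trained to recognize.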

Bio: Dr. Ilkay Oksuz is currently a Research Associate in the Biomedical Engineering Department at King's College London. His current research interests are in medical image segmentation, medical image registration, and machine learning, with a focus on the automated analysis and quality control of cardiac MR. He completed his PhD at the IMT Institute for Advanced Studies Lucca in Computer, Decision, and Systems Science under the supervision of Prof. Sotirios Tsaftaris. His PhD thesis focused on joint registration and segmentation of the myocardium in MR sequences. In 2015, he joined the Diagnostic Radiology Group at Yale University for 10 months as a Postgraduate Fellow, where he worked under the mentorship of Prof. Xenios Papademetris. He also worked at the Institute for Digital Communications at the University of Edinburgh for six months in 2017.


Poster Presentation

Deep Generative Models 2019 (by MEC-Lab@GRIS)

Time: 16.07.2019, 10:00-11:00

Location: Room 073, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speakers: students of the Deep Generative Models course


1. Food Interpolator – smooth transitions between pizza and burger

2. Generating Instagram images from hashtags

3. Interpolation over the space of the Street View House Numbers (SVHN) dataset


Time: 27.05.2019, 16:00

Location: Room 324, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Markus Lehmann (advisor: Jürgen Bernard)

Title: "Visual-Interactive Combination of Selection Strategies to Improve Data Labeling Processes" (Master's thesis)

Abstract: Labeling training data is an important task in machine learning for building effective and efficient classifiers. There are different approaches to gathering labeled instances for a particular data set. The two most important families of strategies are Active Learning and Visual-Interactive Labeling. In previous work, these strategies were examined and compared, resulting in a set of atomic labeling strategies. Additionally, a quasi-optimal strategy was analyzed in order to infer knowledge from its behavior. This investigation yielded two main insights. First, the labeling process consists of different phases. Second, the performance of a strategy depends on the data set and its characteristics.

In this work, we propose a toolkit that enables users to create novel labeling strategies. First, we present multiple visual interfaces users can employ to examine the space of existing algorithms. Then, we introduce a definition of ensembles that users can build upon to combine existing strategies into novel ones. Multiple methods for measuring the quality of labeling strategies are provided, enabling users to examine the gap between their own and existing strategies. The different phases of the labeling process are reflected in the toolkit, allowing users to apply the most appropriate strategy in each phase. Throughout the process, automated guidance supports users in improving their strategies.

We evaluate our concept from different perspectives in order to assess its quality. Overall, we observe that our approach enables users to build ensemble strategies that outperform existing strategies. The insights from this work can be applied to develop novel concepts for ensemble building as well as to improve the generalization of strategies to other data sets.

Time: 22.05.2019, 10:00

Location: Room 072, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Moritz Sachs (advisor: Matthias Unbescheiden)

Title: "Automated Business Model Analysis with Deep Neural Networks" (Master's thesis)

Abstract: A key criterion for a venture capital fund's decision to invest in a start-up is its business model, which is described in the business plan. At most venture capital funds, the screening and analysis of submitted business plans is still performed largely by humans.

This thesis investigates to what extent the analysis of the business models contained in business plans can be automated using deep neural networks. The goal was to develop a prototype that automatically extracts the business models from the business plans and transfers them into the Startup Navigator metamodel.

Following the Knowledge Discovery in Databases process, the business plans of a venture capital fund were preprocessed and used to train a deep convolutional neural network, the multilabel k-nearest neighbour algorithm, and a support vector machine with naive Bayes features.

The results of the developed prototype show that the business models contained in the business plans can be extracted automatically and transferred into the Startup Navigator metamodel. It appears plausible that more training data and more extensive hyperparameter optimization would improve the classification accuracy, so that the prototype could be used to build a corpus of business models.

Time: 30.04.2019, 10:00

Location: Room 103, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Moritz Matthiesen (advisor: Pavel Rojtberg)

Title: "Interpolation of Calibration Data for Zoom and Autofocus Cameras" (Bachelor's thesis)

Abstract: This thesis addresses the problem that a new calibration must be performed for every new camera setting.

The goal is to obtain calibration data at selected camera settings and use them to derive the calibration data for other settings. To this end, the calibration data are examined and relationships between the individual calibration parameters are sought; these are determined by interpolating between the calibrations of different settings.
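
As a minimal illustration of the interpolation idea (all parameter names and values below are hypothetical, not taken from the thesis), calibration parameters measured at a few zoom settings can be interpolated to an uncalibrated intermediate setting:

```python
import numpy as np

# Hypothetical calibration results: focal length (in pixels) and first
# radial distortion coefficient, measured at a few discrete zoom positions.
zoom_positions = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
focal_px = np.array([800.0, 950.0, 1150.0, 1400.0, 1700.0])
k1 = np.array([-0.30, -0.22, -0.15, -0.09, -0.05])

def interpolate_calibration(zoom):
    """Derive approximate calibration parameters for an uncalibrated
    zoom setting by piecewise-linear interpolation between the
    calibrated settings."""
    return {
        "focal_px": float(np.interp(zoom, zoom_positions, focal_px)),
        "k1": float(np.interp(zoom, zoom_positions, k1)),
    }

cal = interpolate_calibration(0.4)   # between the 0.25 and 0.5 settings
```

Piecewise-linear interpolation is only the simplest choice; whether the individual parameters actually vary linearly with the zoom setting is exactly the kind of relationship the thesis investigates.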

Time: 29.04.2019, 15:00

Location: Room 324, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Ali Jabhe (advisor: David Kügler)

Title: "Physical World Attacks in Medical Imaging" (Bachelor's thesis)

Abstract: The methodology and image acquisition for attacks on dermoscopy address the question of whether deep-learning systems can be fooled by a malicious attacker without changing anything on the deep-learning side; that is, only changes in the physical world are allowed. This problem is an extension of the "adversarial attack" concept, but with a twist.

Time: 25.04.2019, 14:00

Location: Room 324, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Heiko Reinemuth (advisor: Jürgen Bernard)

Title: "Visual-Interactive Labeling of Multivariate Time Series to Support Semi-Supervised Machine Learning" (Master's thesis)

Abstract: Labeling multivariate time series is an essential requirement of data-centric decision-making processes in many time-oriented application domains. The basic idea of labeling is to assign (semantic) meaning to specific sections or time steps of a time series, or to the time series as a whole. In this way, weather phenomena can be characterized, EEG signals can be studied, or movement patterns can be marked in sensor data.

In the context of this work, a visual-interactive labeling tool was developed that allows non-expert users to assign semantic meaning to any multivariate time series in an effective and efficient way. Enabling experts as well as non-experts to label multivariate time series in a visual-interactive way had not previously been proposed in the information visualization and visual analytics research communities. This thesis combines active learning methods, a visual analytics approach, and novel visual-interactive interfaces to achieve an intuitive data exploration and labeling process. Visual guidance based on data analysis and model-based predictions empowers users to select and label meaningful instances from the time series. As a result, the human-centered labeling process is enhanced by algorithmic support, leading to a semi-supervised labeling workflow that combines the strengths of both humans and machines.
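
One standard building block of such algorithmic support, uncertainty sampling from active learning, can be sketched as follows (a simplified illustration with made-up predictions; the candidate-selection strategies in the thesis are more elaborate):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a class-probability vector."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def query_most_uncertain(probabilities):
    """Return the index of the unlabeled instance whose predicted class
    distribution has the highest entropy (uncertainty sampling): the
    instance the current model is least sure about is labeled next."""
    return int(np.argmax([entropy(p) for p in probabilities]))

# Model predictions for three unlabeled time-series windows:
preds = np.array([
    [0.95, 0.03, 0.02],   # confident prediction
    [0.40, 0.35, 0.25],   # near-uniform, i.e. uncertain
    [0.80, 0.15, 0.05],
])
next_to_label = query_most_uncertain(preds)   # -> 1
```

Suggesting the most uncertain instance is one form of the model-based guidance described above; the human then supplies the label, and the model is retrained.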

Time: 17.04.2019, 10:00

Location: Room 242, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Johann Reinhard (advisor: Alan Brunton)

Title: "Efficient streaming sample-based surface triangulation of voxel data" (Master's thesis)

Abstract: Voxel-based discrete representations of three-dimensional data are widely used in many areas of graphical computing, for instance in the 3D printing driver Cuttlefish. With commonly used techniques, such as the marching cubes algorithm, creating a polygonal/polyhedral mesh representation of the voxel data at high resolutions can become time-consuming and result in meshes with excessive numbers of vertices, which nonetheless exhibit "staircase" artifacts relative to the desired geometry. It is then often necessary to apply additional post-processing steps, such as mesh decimation, at the expense of additional computational effort and possible inaccuracies in the representation of the original shape.

The goal of this thesis is to address all three of these issues simultaneously by proposing an efficient technique to generate low-polygon meshes that accurately represent the object's shape. The intended technique is based on sampling the surface at regions of high curvature using, for example, an importance sampling technique, although different techniques will be explored. A comparison will be made between per-slice and per-chunk sampling (i.e. considering only a single slice, or a whole chunk of slices, when deciding where to place samples). The samples are to be mapped to a parametric, planar space, allowing the sampled points to be triangulated efficiently. The necessity of additional post-processing steps in the parametric or reprojected object space will be assessed. The developed techniques will be implemented, integrated into Cuttlefish, and evaluated against standard techniques such as marching cubes or marching tetrahedra using three measures: efficiency (time and memory), number of polygons in the output, and accuracy. Defining and computing a measure of output accuracy is a further aspect of the thesis; at least the Hausdorff distance and the collinearity of the surface normals will be measured in order to quantify mesh quality.
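
For illustration, the Hausdorff distance mentioned as an accuracy measure can be computed for small vertex sets directly from its definition (a brute-force sketch on toy data; production code would use spatial acceleration structures):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets A (n x 3)
    and B (m x 3), computed by brute force."""
    # Pairwise Euclidean distances via broadcasting: d[i, j] = |A_i - B_j|.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Directed distances: the worst best-match in each direction.
    d_ab = d.min(axis=1).max()   # sup over A of the distance to B
    d_ba = d.min(axis=0).max()   # sup over B of the distance to A
    return float(max(d_ab, d_ba))

# Vertices of an original mesh vs. a decimated approximation (toy data):
original = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
decimated = np.array([[0.0, 0, 0], [1, 1, 0]])
err = hausdorff(original, decimated)   # -> 1.0 (the dropped corners)
```

The brute-force distance matrix is quadratic in the number of vertices, which is exactly why a streaming, high-resolution setting calls for a more careful implementation.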

Time: 20.03.2019, 15:00

Location: Room 073, Fraunhofer IGD, Fraunhoferstrasse 5, S3|05

Speaker: Alexander Distergoft (advisor: Anirban Mukhopadhyay)

Title: "Interpreting Adversarial Examples in Medical Imaging" (Master's thesis)

Abstract: Deep neural networks (DNNs) have been achieving high accuracy on many important tasks such as image classification, detection, and segmentation. Yet recent findings have shown that these deep-learning algorithms are highly susceptible to attack: DNNs are vulnerable to small amounts of non-random noise, created by perturbing the input-to-output mapping of the network. Such perturbations can severely degrade the performance of DNNs and thus endanger systems in which such models are deployed.
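
The kind of small, targeted perturbation described above can be illustrated with the fast gradient sign method (FGSM) on a toy logistic-regression classifier (a generic sketch; none of the models examined in the thesis are reproduced here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" classifier: p(y=1 | x) = sigmoid(w.x + b).
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def fgsm_perturb(x, y_true, eps=0.2):
    """Fast gradient sign method: step each input feature by eps in the
    direction that increases the loss for the true label y_true."""
    p = sigmoid(w @ x + b)
    # Gradient of the binary cross-entropy loss w.r.t. x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, 0.0])          # confidently classified as class 1
p_clean = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, y_true=1.0)    # small per-feature change
p_adv = sigmoid(w @ x_adv + b)         # confidence drops
```

A small, structured change to every feature moves the prediction away from the true class; for image classifiers the same effect is achieved with visually imperceptible pixel perturbations.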

The purpose of this thesis is to examine adversarial examples in clinical settings, whether digitally created or physical. For this reason, we studied the performance of DNNs under the following three attack scenarios:

1. We hypothesize that adversarial examples might arise from an incorrect mapping of the image space to the lower-dimensional generation manifold. The hypothesis is tested on a proxy task, pose estimation of surgical tools in its simplest form, for which we define a clear decision boundary. We use exhaustive search on a synthetic toy dataset to localize possible causes of successful one-pixel attacks in image space.

2. We design a small-scale prospective evaluation of how deep-learning (DL) dermoscopy systems perform under physical-world attacks. The publicly available Physical Attacks on Dermoscopy Dataset (PADv1) is used for this evaluation. The introduced susceptibility and robustness values reveal that such attacks lead to accuracy loss across popular state-of-the-art DL architectures.

3. As a pilot study to understand the vulnerabilities of DNNs performing regression tasks, we design a set of auxiliary tasks that are used to create adversarial examples for non-classification models. We train auxiliary networks on augmented datasets to satisfy the defined auxiliary tasks and create adversarial examples that can influence the decision of a regression model without knowledge of the underlying system or its hyperparameters.