Organizer: SFB MAKI
Speaker: Volker Tresp, Ludwig-Maximilians-Universität München
Labeled graphs can describe states and events at a cognitive level of abstraction, representing facts as subject-predicate-object triples. A prominent and very successful example is the Google Knowledge Graph, which represents on the order of 100 billion facts. Labeled graphs can be encoded as adjacency tensors, which can serve as inputs for prediction and decision making, and from which tensor models can be derived that generalize to unseen facts. We show how these ideas can be used, together with deep recurrent networks, for clinical decision support by predicting orders and outcomes. Following Goethe’s proverb, “you only see what you know”, we show how background knowledge can dramatically improve information extraction from images by deep convolutional networks, and how tensor train models enable efficient classification of videos. We discuss potential links to the memory and perceptual systems of the human brain. We conclude that tensor models, in connection with deep learning, can underpin many technical solutions requiring memory and perception, and might form a foundation for modern AI.
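The first steps of the abstract can be sketched in a few lines: triples become a binary adjacency tensor with one slice per predicate, and a tensor factorization assigns every triple a score. The toy facts, and the RESCAL-style bilinear model fitted here via an SVD, are illustrative assumptions, not necessarily the exact models discussed in the talk.

```python
import numpy as np

# Facts as subject-predicate-object triples, as in a knowledge graph (toy data).
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
    ("Berlin", "locatedIn", "Germany"),
    ("Munich", "locatedIn", "Germany"),
    ("Paris", "locatedIn", "France"),
    ("Lyon", "locatedIn", "France"),
]

entities = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
predicates = sorted({p for _, p, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
p_idx = {p: i for i, p in enumerate(predicates)}

# Adjacency tensor: one |E| x |E| slice per predicate; X[p, s, o] = 1 iff the fact holds.
n, m = len(entities), len(predicates)
X = np.zeros((m, n, n))
for s, p, o in triples:
    X[p_idx[p], e_idx[s], e_idx[o]] = 1.0

# RESCAL-style factorization: score(s, p, o) = a_s^T R_p a_o.
# Entity embeddings A come from an SVD of the unfolded tensor (slices plus their
# transposes, so subject and object roles share one embedding space); each core
# matrix R_p is then the least-squares fit A^T X_p A.
M = np.concatenate([X[p] for p in range(m)] + [X[p].T for p in range(m)], axis=1)
r = np.linalg.matrix_rank(M)  # full rank fits exactly; a smaller r forces generalization
A = np.linalg.svd(M, full_matrices=False)[0][:, :r]
R = np.stack([A.T @ X[p] @ A for p in range(m)])

def score(s, p, o):
    """Predicted plausibility of the triple (s, p, o)."""
    return float(A[e_idx[s]] @ R[p_idx[p]] @ A[e_idx[o]])
```

With the rank chosen below the rank of the data, the same machinery smooths over the observed facts and assigns nonzero scores to plausible unseen triples, which is the generalization the abstract refers to.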
Volker Tresp is a Distinguished Research Scientist at Siemens and a Professor of Machine Learning at the Ludwig Maximilian University of Munich (LMU). He received a Diploma degree from the University of Goettingen, Germany, in 1984 and the M.Sc. and Ph.D. degrees from Yale University, New Haven, CT, in 1986 and 1989, respectively. Since 1989 he has headed various machine learning research teams at Siemens Research and Technology. He has filed more than 70 patent applications and was named Siemens Inventor of the Year in 1996. He has published more than 150 scientific articles and supervised over 25 Ph.D. theses. The company Panoratio is a spin-off from his team. His research focus in recent years has been “Machine Learning in Information Networks”, covering knowledge graph modelling, medical decision processes, perception, and cognitive memory functions. He has been the consortium lead of a number of publicly funded projects. Since 2011 he has also been a Professor at the Ludwig Maximilian University of Munich, where he teaches an annual course on Machine Learning.