Artificial Intelligence and Machine Learning
SIREN – Structured Interactive Perception and Learning for Holistic Robotic Embodied Intelligence
SIREN proposes a unique systemic view of robot learning with a holistic representation of robot and environment as an integrated system. The researchers posit that robot and environment are not separate entities: they co-exist in the same constrained world, subject to the same physical laws, while exchanging information. SIREN aims to uncover the underlying structure and information flow that govern robot-environment interaction, with the goal of unveiling key properties of the action-perception cycle for developing embodied intelligence. To this end, the team will study the intertwined flow of information and energy within the components of the proposed holistic robot-environment system.
Prof. Georgia Chalvatzaki, Ph.D., Interactive Robot Perception and Learning Lab. Project website: SIREN. Funding duration: 2025 – 2030
InterText – Modeling Text as a Living Object in Cross-Document Context
Over a period of five years, the InterText research project will develop AI methods for processing and analysing texts and their relationships to one another, such as contradictions, implicit references, or comments. In the age of information overload, this technology is intended to provide users with a concise summary of complex information on a specific topic and, for example, to check it for misinformation.
Prof. Dr. Iryna Gurevych, Ubiquitous Knowledge Processing Lab. Project website: InterText. Funding duration: 2022 – 2027
Visual Robot Programming
For its new Transition Grant, the European Innovation Council (EIC) has for the first time selected 42 projects from 292 proposals to receive a total of €99 million in EU funding. The very first Transition Grant goes, with the highest possible score, to computer science professor Jan Peters and his team at the Intelligent Autonomous Systems Group. For their pioneering project “Visual Robot Programming”, they will receive funding of over €1 million over two years. The aim of the project is to bring the novel technology of visual robot programming (VRP) to market by 2024. VRP makes it possible to programme robots solely through gestures, without writing a single line of code. With this intuitive technology, workers with no prior knowledge of robotics and only minimal training can teach industrial robots.
Prof. Jan Peters, Ph.D., Intelligent Autonomous Systems Group. Funding duration: 2022 – 2024
RED – Robust, Explainable Deep Networks in Computer Vision
The goal of this project is to develop methods that make artificial neural networks in computer vision, particularly so-called deep networks, more robust and more explainable. One particular aim is to increase user trust in machine learning approaches to computer vision, for example in the context of autonomous vehicles. The project ultimately aims to create a toolbox with architectures, algorithms, and best practices for deep neural networks that enable their use in computer vision applications in which robustness is key, data is limited, and user trust is paramount.
Prof. Stefan Roth, Ph.D., Visual Inference Lab. Funding duration: 2020 – 2025. Project Fact Sheet at CORDIS
Learning Digital Humans in Motion
The project focusses on the image processing and graphics aspects of recording people. The researchers will analyse how people move and develop data-driven motion synthesis methods. The aim is to create a sociable digital human using commercially available hardware. In doing so, the researchers want to answer whether natural language can be used to reconstruct, depict and model the appearance, movement and interactions of digital humans.
Prof. Dr. Justus Thies, 3D Graphics & Vision Lab. Funding duration: 2025 – 2030
IT-Security
HYDRANOS
In the HYDRANOS project, security-critical components and mechanisms in the System-on-Chip (SoC) that can lead to cross-layer vulnerabilities and information leaks are systematically investigated and modelled. Dedicated configuration options for the identified security-relevant hardware components are designed, enabling the computing platform to be adapted to changing threat models. A proof-of-concept implementation will then be published as the first European open computing platform with an adaptive security architecture. A number of challenges will be addressed: (i) How are security-relevant elements modelled and mapped to configurable entities? (ii) How is the interaction of configurable and static components realised efficiently? (iii) How can the configuration strategies be modified safely and validated efficiently? (iv) What optimisation strategies should be used to balance security, performance, energy consumption and the size of the configurable hardware elements and primitives?
Prof. Dr. Ahmad-Reza Sadeghi, System Security Lab. Funding duration: 2022 – 2027
CRYPTOLAYER – Cryptography for Second Layer Blockchain Protocols
The aim of the “CRYPTOLAYER” project is to make decentralised blockchain technologies usable for a wide range of applications. These technologies offer a new way to perform computations without trusting a central platform provider; for example, they can process payment transactions in a distributed fashion, powered by a large number of computers. While this approach results in a very high level of security, it has many disadvantages for mass adoption: blockchain computations are currently very expensive, publicly visible, and unable to communicate with the outside world. Within CRYPTOLAYER, a second protocol layer will run on top of the blockchain, allowing computations to be carried out quickly and at minimal cost. In a second step, the confidentiality of transaction data will be ensured through cryptographic protocols. This creates the prerequisite for running a multitude of applications on decentralised platforms. Beyond the digitalisation of financial products with the help of cryptocurrencies, applications in classic cloud computing, for example, can also benefit.
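The second-layer idea can be illustrated with a toy payment channel (a minimal Python sketch, not the project's actual protocol; the class, field names and HMAC "signatures" are illustrative assumptions): parties exchange mutually signed balance updates off-chain, and only the final state would touch the blockchain.

```python
import hashlib
import hmac

class PaymentChannel:
    """Toy two-party payment channel: many off-chain updates, one settlement."""

    def __init__(self, balance_a, balance_b, key_a, key_b):
        self.state = {"version": 0, "a": balance_a, "b": balance_b}
        self.keys = {"a": key_a, "b": key_b}

    def _sign(self, party, state):
        # Stand-in for a real digital signature over the channel state.
        msg = repr(sorted(state.items())).encode()
        return hmac.new(self.keys[party], msg, hashlib.sha256).hexdigest()

    def pay(self, amount, frm="a", to="b"):
        # Off-chain update: both parties sign the new state; nothing hits the chain.
        new = dict(self.state)
        new["version"] += 1
        new[frm] -= amount
        new[to] += amount
        assert new[frm] >= 0, "insufficient channel balance"
        sigs = {p: self._sign(p, new) for p in ("a", "b")}
        self.state = new
        return new, sigs

    def settle(self):
        # Only this final, mutually signed state would be posted on chain.
        return self.state

ch = PaymentChannel(10, 5, b"key-a", b"key-b")
ch.pay(3)            # a -> b, off-chain
ch.pay(1, "b", "a")  # b -> a, off-chain
print(ch.settle())   # {'version': 2, 'a': 8, 'b': 7}
```

Real channel constructions additionally handle disputes (an old state posted by a cheating party can be overridden by a newer signed version), which is why the version counter matters.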
Prof. Dr. Sebastian Faust, Applied Cryptography Group. Funding duration: 2022 – 2027
PSOTI – Privacy-preserving Services On The Internet
The main goal of “PSOTI” is to develop privacy-preserving services for commonly used Internet applications such as data storage, online surveys, and email. These services will provide extensive functionality and allow users to securely and efficiently store, retrieve, search, and process data. This supports compliance with the EU General Data Protection Regulation (GDPR) and preserves the fundamental rights to privacy and the protection of personal data. A practical system for secure multi-party computation will be developed, which can also be used for the secure processing of other sensitive data, such as in genomics or machine learning. In addition, protocols for private search queries will be built that hide even the structure of the query and can be used in multiple application scenarios.
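A minimal sketch of the secure multi-party computation idea, assuming a toy additive secret-sharing scheme (illustrative only, not PSOTI's actual protocols): each input is split into random shares that individually reveal nothing, yet the parties can jointly compute the sum.

```python
import random

P = 2**31 - 1  # public modulus; all arithmetic is done mod P

def share(secret, n=3):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(all_shares):
    """Each party sums the shares it holds; combining the per-party totals
    reveals only the overall sum, never an individual input."""
    party_totals = [sum(col) % P for col in zip(*all_shares)]
    return sum(party_totals) % P

salaries = [52_000, 61_000, 47_000]        # each value stays with its owner
all_shares = [share(s) for s in salaries]  # one share goes to each party
print(secure_sum(all_shares))              # 160000: the sum, nothing else
```

Any single share is a uniformly random value mod P, so no party learns anything about another party's input from the shares it sees.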
Prof. Dr.-Ing. Thomas Schneider, ENCRYPTO Group. Funding duration: 2020 – 2025. Project Fact Sheet at CORDIS
PRIVTOOLS – Tools for Protecting Data and Function Privacy
The main goal of “PRIVTOOLS” is to improve and unify privacy-preserving technologies and to develop tools for their automatic generation. To this end, three methods are considered in particular. Secure multi-party computation allows several participants to jointly compute a publicly known function without disclosing their secret input data. Private set intersection protocols allow participants to compute the intersection of their secret databases, or variants thereof. Private function evaluation enables the secure evaluation of a secret function on secret input data.
Prof. Dr.-Ing. Thomas Schneider, ENCRYPTO Group. Funding duration: 2025 – 2030
Past Research Programmes
AssemblySkills
The ERC Proof of Concept project “AssemblySkills” builds on the artificial intelligence methods developed within the ERC Starting Grant “SKILLS4ROBOTS – Policy Learning of Motor Skills for Humanoid Robots”. The latter has yielded a structured, modular control architecture that has the potential to scale robot learning to more complex real-world tasks. In this architecture, elemental building blocks – called movement primitives – are adapted, sequenced or co-activated to fulfil the robot’s tasks. Within “AssemblySkills”, the goal is to bundle these modules into a complete software package that enables application-driven robots to learn new skills, particularly in assembly tasks. The value proposition of the project is a cost-effective, novel machine learning system that can unlock the potential of manufacturing robots by enabling them to learn to select, adapt and sequence parametrized building blocks such as movement primitives. The approach of Professor Jan Peters’ research team is unique in that it can acquire more than just a single desired trajectory (as done in competing approaches), is capable of safe policy adaptation, requires only little data, and can explain the solution to the robot operator.
Prof. Jan Peters, Ph.D., Intelligent Autonomous Systems Group. Funding duration: 2021 – 2022
SKILLS4ROBOTS – Policy Learning of Motor Skills for Humanoid Robots
The goal of SKILLS4ROBOTS is to develop an autonomous skill learning system that enables humanoid robots to acquire and improve a rich set of motor skills. This system will allow motor abilities to scale up to fully anthropomorphic robots, overcoming the limitation of current skill learning systems to only a few degrees of freedom. To achieve this, complex motor skills are decomposed into simpler elemental movements – called movement primitives – that serve as building blocks for the higher-level movement strategy. Learned primitives can be superimposed, sequenced and blended, and the resulting architecture will be able to address arbitrary, highly complex tasks – up to robot table tennis for a humanoid robot.
Prof. Jan Peters, Ph.D., Intelligent Autonomous Systems Group. Funding duration: 2015 – 2021. Project information at CORDIS
REScala – A Programming Platform for Reactive Data-intensive Applications
REScala is a reactive language that integrates concepts from event-based and functional-reactive programming into the object-oriented world. It supports the development of reactive applications by fostering a functional and declarative style that complements the advantages of object-oriented design. Implemented as a Scala library for functional reactive programming on the JVM and the Web, REScala provides a rich API for event stream transformations and signal composition with managed, consistently up-to-date state and minimal syntactic overhead, and it supports concurrent and distributed programs.
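The signal-composition idea can be sketched in a few lines of Python (a conceptual illustration only; REScala's actual API is in Scala and manages consistency, concurrency and distribution far more carefully): a derived signal is recomputed automatically whenever one of its source values changes.

```python
class Var:
    """Source signal: holds a value and pushes changes to dependents."""

    def __init__(self, value):
        self.value = value
        self.observers = []

    def set(self, value):
        self.value = value
        for obs in self.observers:
            obs.recompute()

class Signal:
    """Derived signal: its value is a function of other signals and is
    recomputed whenever any dependency changes."""

    def __init__(self, fn, *deps):
        self.fn = fn
        for dep in deps:
            dep.observers.append(self)
        self.recompute()

    def recompute(self):
        self.value = self.fn()

price = Var(10)
qty = Var(3)
total = Signal(lambda: price.value * qty.value, price, qty)
print(total.value)  # 30
qty.set(5)
print(total.value)  # 50 -- updated without any manual wiring
```

In a full FRP runtime, updates are additionally glitch-free: a change propagates through the dependency graph so that no derived signal is ever observed in an inconsistent intermediate state.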
Prof. Dr.-Ing. Mira Mezini, Software Technology Group. Funding duration: 2019 – 2021. Project Fact Sheet at CORDIS
PACE – Programming Abstractions for Applications in Cloud Environments
PACE will deliver first-class linguistic abstractions for expressing sophisticated correlations between data and events, to be used as primitives for expressing high-level functionality. Armed with these, programmers are relieved from micromanaging data and events and can turn their attention to what the cloud has to offer. Applications become easier to understand, maintain and evolve, and more amenable to automated reasoning and sophisticated optimizations. PACE will also deliver language concepts for large-scale modularity, extensibility, and adaptability for capturing highly polymorphic software services.
Prof. Dr.-Ing. Mira Mezini, Software Technology Group. Funding duration: 2013 – 2018. Project Fact Sheet at CORDIS
VISLIM – Visual Learning and Inference in Joint Scene Models
This ERC-funded project is concerned with the joint estimation of several scene attributes from one or more images, with the aim of leveraging their dependencies. The project covers aspects of modeling, learning and inference in such joint scene models.
Prof. Stefan Roth, Ph.D., Visual Inference Group. Funding duration: 2013 – 2018