MAKI: Multi-Mechanisms Adaptation for the Future Internet
The Collaborative Research Center (CRC) MAKI is one of DFG's largest collaborative research activities in computer science, with more than €17M in funding approved to date. The overarching goal of MAKI is to make the Internet, its applications, and its platforms more adaptive and flexible, most notably at runtime. While many activities of the first phase emphasized better support for mobile applications under changing conditions (user mobility, load fluctuations, etc.), many researchers in the current phase exploit the 'softwarization' of core and wireless networks to enable large-scale adaptation. A category of adaptations denoted as 'transitions' is of key importance in MAKI. A transition replaces one 'mechanism' with another of equivalent functionality at runtime, where a 'mechanism' may be a protocol, a protocol function, a strategy, a topology, etc. The SUN area works primarily on subprojects A01 and B02 within this collaborative research center.
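The idea of a transition, i.e. swapping one mechanism for a functionally equivalent one at runtime, can be illustrated with a minimal sketch. The mechanism names (`FloodingRouting`, `GeoRouting`) and the `TransitionEngine` class are illustrative assumptions, not part of the MAKI architecture:

```python
from abc import ABC, abstractmethod

class ForwardingMechanism(ABC):
    """One interchangeable 'mechanism' (here: a routing strategy)."""
    @abstractmethod
    def route(self, packet: str) -> str: ...

class FloodingRouting(ForwardingMechanism):
    def route(self, packet: str) -> str:
        return f"flooded:{packet}"

class GeoRouting(ForwardingMechanism):
    def route(self, packet: str) -> str:
        return f"geo-routed:{packet}"

class TransitionEngine:
    """Holds the active mechanism and swaps it at runtime (a 'transition')."""
    def __init__(self, mechanism: ForwardingMechanism) -> None:
        self.active = mechanism

    def transition_to(self, new_mechanism: ForwardingMechanism) -> None:
        # In a real system, state would be handed over between
        # the old and the new mechanism here.
        self.active = new_mechanism

    def route(self, packet: str) -> str:
        return self.active.route(packet)

engine = TransitionEngine(FloodingRouting())
print(engine.route("p1"))      # flooded:p1
engine.transition_to(GeoRouting())
print(engine.route("p2"))      # geo-routed:p2
```

Callers only ever talk to the engine, so the exchange of the underlying mechanism is transparent to the running application.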
SmartEdge: Concepts and Methods for Edge Computing
A critical resource mismatch has been observed in the Internet of Things (IoT) context: large volumes of data constantly generated by a massive number of devices need to be processed, while those devices themselves are resource-constrained. Cloud-based solutions resolve this mismatch by leveraging the power of cloud computing, but they raise new concerns over latency, traffic, and privacy. To address this, edge computing was recently proposed: it introduces an intermediate tier equipped with computing resources at the network edge. The main goal of the proposed project is to advance this research direction by identifying four major scientific challenges in edge computing and providing a unified platform and solutions to address them. Theoretical foundations, efficient algorithms and mechanisms, as well as reference system architectures will be produced to guide the design, development, and operation of a modern edge computing system for IoT.
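The three-tier architecture (device, edge, cloud) implies an offloading decision for each task. The following sketch, with purely illustrative parameters and not taken from the SmartEdge project, shows one simple policy: prefer the tier closest to the data that still meets the task's deadline:

```python
def choose_tier(device_ms: float, edge_rtt_ms: float, edge_ms: float,
                cloud_rtt_ms: float, cloud_ms: float,
                deadline_ms: float) -> str:
    """Pick an execution tier for one task (all times in milliseconds).

    device_ms: processing time on the constrained device itself
    edge_*/cloud_*: network round-trip plus processing time per tier
    """
    options = [
        ("device", device_ms),
        ("edge", edge_rtt_ms + edge_ms),
        ("cloud", cloud_rtt_ms + cloud_ms),
    ]
    # Prefer the lowest tier (closest to the data) that meets the deadline;
    # this keeps traffic local and limits privacy exposure.
    for tier, total in options:
        if total <= deadline_ms:
            return tier
    # If the deadline cannot be met anywhere, fall back to the fastest option.
    return min(options, key=lambda o: o[1])[0]

# A slow device with a nearby edge node: the edge tier wins.
print(choose_tier(device_ms=50, edge_rtt_ms=10, edge_ms=5,
                  cloud_rtt_ms=100, cloud_ms=2, deadline_ms=20))  # edge
```

Real policies would also weigh energy, monetary cost, and load, but the deadline-driven preference for nearer tiers captures the basic latency/traffic trade-off described above.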
DiSPErse: Learning and Distributed Algorithms for Service Placement in Edge Computing Infrastructures
Cloud services are typically hosted in large, centralized data centers to exploit their flexibility and economies of scale. However, this centralization also means that most services are far away from their users, which can be problematic for services that require low latency or place a high load on the network. For such applications, the concept of Edge Computing has been proposed: supplementing the centralized data centers with a myriad of distributed micro data centers that are physically much closer to their users. However, as the resources of each individual micro data center are much more limited and inflexible than those of a typical cloud data center, the questions of where to execute which services and where to store which data become highly relevant to providing the best possible service to all users. DiSPErse aims to research learning-based distributed algorithms for efficient resource allocation in such a distributed network of micro data centers. A special focus is put on the requirements of critical applications, such as redundancy and data replication, since applications that could benefit from edge computing are likely to depend on its availability.
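The core placement question, which service runs on which capacity-limited micro data center, and with how many replicas, can be sketched as a simple greedy baseline. The data model and the greedy strategy below are illustrative assumptions, not the algorithms developed in DiSPErse:

```python
def place_services(services, capacity, latency):
    """Greedy service placement with replication.

    services: list of (name, demand, replicas) tuples
    capacity: dict mapping site -> remaining capacity (mutated in place)
    latency:  dict mapping (site, name) -> latency from that service's users
    Returns a dict mapping service name -> list of chosen sites.
    """
    placement = {}
    for name, demand, replicas in services:
        # Rank candidate sites by the latency this service's users would see.
        ranked = sorted(capacity, key=lambda site: latency[(site, name)])
        chosen = []
        for site in ranked:
            if capacity[site] >= demand:
                capacity[site] -= demand
                chosen.append(site)      # one replica placed here
            if len(chosen) == replicas:
                break
        placement[name] = chosen
    return placement

sites = {"mdc-a": 10, "mdc-b": 10}
lat = {("mdc-a", "svc"): 1, ("mdc-b", "svc"): 5}
# Two replicas for availability: the second lands on the next-best site.
print(place_services([("svc", 4, 2)], sites, lat))
# {'svc': ['mdc-a', 'mdc-b']}
```

A greedy baseline like this ignores future demand and inter-service interference, which is precisely where learning-based, distributed approaches are expected to improve on it.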