Software is modified regularly: bugs get fixed, new functionality is added, or existing functionality is adapted. After each change, the software must be verified again. Instead of restarting the verification from scratch, incremental verification aims at using information from previous verification runs to simplify the verification of the modified software.
In cooperation with the research group of Prof. Dirk Beyer, we study techniques that aim at verifying modified paths only. To this end, we develop approaches that identify modified paths and output them in an exchangeable format (called a condition). The verifier is then guided by the condition to explore only the modified paths.
Also, we develop techniques to reuse information provided by one verifier when verifying a modified program with a different verifier.
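The idea of a condition can be illustrated with a toy program model. The sketch below is purely illustrative and assumes each execution path is represented as a tuple of CFG edge labels; a real condition is a more structured artifact, but the guiding principle is the same: paths covered by a previous run need not be re-explored.

```python
# Illustrative sketch of condition-guided re-verification.
# A "condition" from a previous verification run records the paths that
# were already proved safe; the guided verifier explores only the rest.

def modified_paths(all_paths, condition):
    """Return the paths the verifier must still explore."""
    return [p for p in all_paths if p not in condition]

# Paths of the original program that were fully verified earlier:
condition = {("init", "loop", "exit"), ("init", "error_check", "exit")}

# Paths of the modified program (one path was added by the change):
paths = [("init", "loop", "exit"),
         ("init", "error_check", "exit"),
         ("init", "new_feature", "exit")]  # introduced by the modification

print(modified_paths(paths, condition))  # only the new path remains
```

On this toy model, the guided verifier inspects one path instead of three; the saving grows with the fraction of the program left unchanged.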
Testing is a widely used method to assess software quality, and automatic approaches are often used to generate test cases. Verification techniques are well suited for automatic test-case generation: they only need to check the reachability of all test goals and output test cases for all goals proved reachable.
To build more efficient and effective test-generation tools based on verification technology, our research focuses on combinations of different verification approaches. We develop CoVeriTest, a test-generation approach that relies on cooperative combinations of verification approaches and has successfully participated in the international competitions on software testing.
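How reachability checking turns into test generation can be sketched as follows. This is an illustrative toy: a brute-force input search stands in for a real verifier, and the goal names are hypothetical. The key point is that a counterexample to the claim "this goal is unreachable" is exactly a test case reaching that goal.

```python
# Illustrative sketch of verification-based test generation.
# Each test goal is a program location; proving a goal reachable
# yields the reaching input as a concrete test case.

def program(x):
    """Toy program under test; returns the test goal it reaches."""
    if x > 10:
        return "goal_then"   # test goal 1: then-branch taken
    return "goal_else"       # test goal 2: else-branch taken

def generate_tests(goals, inputs):
    """Brute-force stand-in for a verifier checking goal reachability."""
    tests = {}
    for x in inputs:
        reached = program(x)
        if reached in goals and reached not in tests:
            tests[reached] = x  # reaching input = generated test case
    return tests

tests = generate_tests({"goal_then", "goal_else"}, range(0, 20))
print(tests)  # one concrete input per reached goal
```

A real verifier replaces the brute-force search with symbolic techniques, so it can also prove goals unreachable instead of merely failing to hit them.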
Combination of Verification Approaches
Verifiers have different strengths and weaknesses. For some programs, no single existing verifier is sufficient to verify the program. Instead of developing new, better verifiers, we aim at easily combining existing verifiers.
Together with the groups of Prof. Heike Wehrheim and Prof. Dirk Beyer, we develop techniques for cooperative software verification. We are interested in the following research questions: Which information should be exchanged between verifiers to cooperate successfully, and how? What must be done to let an arbitrary verifier make use of exchanged information? How can information resulting from different cooperation partners be combined? To answer some of these questions, we build on the idea of conditional model checking.
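Conditional model checking can be sketched on the same toy path model as above. The example is illustrative and all names are hypothetical: verifier A runs under a work budget and emits the set of paths it completed as a condition; verifier B consumes the condition and only checks the remaining paths.

```python
# Illustrative sketch of conditional model checking.
# A toy "program" is a list of (path name, status) pairs.

def verifier_a(paths, budget):
    """Verifier A checks paths until its budget runs out and then
    emits the names of the checked paths as a condition."""
    checked = set()
    for name, status in paths[:budget]:   # A gives up after `budget` paths
        if status == "safe":
            checked.add(name)
    return checked                        # the condition: work already done

def verifier_b(paths, condition):
    """Verifier B skips everything covered by the condition."""
    for name, status in paths:
        if name in condition:
            continue                      # A already verified this path
        if status != "safe":
            return f"violation on {name}"
    return "safe"

paths = [("p1", "safe"), ("p2", "safe"), ("p3", "safe")]
cond = verifier_a(paths, budget=2)        # A verifies p1 and p2, then stops
print(verifier_b(paths, cond))            # B completes the remaining work
```

The condition is the exchanged information: it lets B benefit from A's partial result even though neither verifier alone finished the job within its limits.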
Optimizing Runtime Monitoring
Runtime monitoring is a popular approach to ensure that programs meet certain correctness properties. To this end, a monitor observes the program execution and reacts when a property violation is about to occur. Observing a single execution simplifies the analysis and can make it more precise, because information about that execution can be used. However, the monitor introduces a performance overhead. In the worst case, the overhead is so large that monitoring is no longer practical. Therefore, optimizations for runtime monitoring aim at reducing the performance overhead.
Together with the research group of Prof. Dr. Heiko Mantel, we study approaches to optimize runtime monitoring, with the goal of formalizing them in a common framework and analyzing them, e.g., proving that the optimizations do not violate properties of runtime monitoring such as soundness. Furthermore, we go beyond experimental evaluation of optimization approaches and also analyze their performance analytically.
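A minimal sketch of a monitor and one common optimization, filtering out events that are irrelevant to the property before they reach the monitor, is shown below. The property and event names are hypothetical; the point is that for this property the optimized monitor must report the same verdict as the unoptimized one, which is the kind of soundness claim we aim to prove in general.

```python
# Illustrative sketch of a runtime monitor and an event-filtering
# optimization. Property: no "write" event may occur after "close".

RELEVANT = {"write", "close"}  # events the property actually mentions

def monitor(events):
    """Unoptimized monitor: inspects every event of the trace."""
    closed = False
    for e in events:
        if e == "close":
            closed = True
        elif e == "write" and closed:
            return "violation"
    return "ok"

def optimized_monitor(events):
    """Optimization: forward only property-relevant events.
    Cheaper on long traces; the verdict must stay the same."""
    return monitor(e for e in events if e in RELEVANT)

trace = ["open", "write", "read", "close", "write"]
print(monitor(trace), optimized_monitor(trace))  # both report "violation"
```

For this simple property the filter is obviously verdict-preserving; for richer properties and more aggressive optimizations, establishing such preservation is exactly where a formal framework pays off.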
Validation of Verification Results
Verification results are not always correct. Bugs in the verifier, wrong assumptions, or conceptual weaknesses of the verification approach may lead to wrong answers, either a false alarm (a reported error that does not exist) or a false proof. Instead of trusting the verifier's result, one should require the verifier to provide evidence for its result and check this evidence independently.
In the past, we mainly focused on providing evidence for and checking results from the verification of sequential programs. Our current focus is on explaining and checking results of the verification of parallel programs.
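For the case of a reported error, validation can be sketched very simply. In this illustrative toy, the evidence (a violation witness) is just an input, and an independent validator re-executes the program on that input to confirm or reject the alarm; real witnesses additionally describe the error path so the validator need not search.

```python
# Illustrative sketch of validating a verifier's "bug found" answer.
# The verifier must supply a witness (here: an input); an independent
# validator replays it instead of trusting the claim.

def program(x):
    """Toy program with a reachable property violation."""
    assert x != 42, "property violated"   # the bug the verifier reports

def validate_witness(prog, witness_input):
    """Re-execute the program on the witness input."""
    try:
        prog(witness_input)
    except AssertionError:
        return "confirmed"    # the alarm is real
    return "rejected"         # witness does not replay: likely a false alarm

print(validate_witness(program, 42))  # the witness replays the violation
print(validate_witness(program, 7))   # a bogus witness is rejected
```

Validating a false proof is harder, since a proof witness must carry invariants an independent checker can re-establish; this is one reason checking results for parallel programs, with their many interleavings, is our current focus.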