Seminar: Advanced Approaches in Software Verification (AASV)
Lecturer: Prof. Dr. Marie-Christine Jakobs
The seminar deals with current research topics of the research group Semantics and Verification of Parallel Systems. You will learn about classical and recent research in the area of software verification (e.g., model checking, program analysis, testing). The current topics are listed here.
Important Dates in WS 2021/22
- Initial Meeting: 2021-10-25, 3:00 pm
(see Moodle for details)
- Topic Choice: online via Moodle until 2021-10-31
- Presentations: TBA
- Submission deadlines
- 2021-11-14 Submission of planned paper structure
- 2021-11-28 Submission of slide draft
- 2022-01-30 Submission of paper draft
- 2022-03-20 Submission of final paper
Topics in WS 2021/22
This seminar studies topics on automatic test-case generation. The goal of test-case generation is to find test inputs that reveal bugs in the program, or to find a set of test inputs that achieves a certain coverage value. In this seminar, we look into four different types of automatic test-case generation: (1) random input generation, (2) search-based input generation, (3) input generation with symbolic execution, and (4) input generation with reachability analyses.
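To give a first flavor of the topic, the idea behind random input generation (type 1) can be sketched in a few lines: generate many random inputs and check a property on each, QuickCheck-style. This is a minimal illustration with toy properties, not the actual tool; all names are illustrative.

```python
import random

def quickcheck(prop, gen, trials=1000, seed=0):
    """Run a property on many randomly generated inputs; return a
    counterexample input if the property fails, else None."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x          # a test input revealing a bug
    return None               # no bug found within the trial budget

def random_list(rng):
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]

# A property that holds: reversing a list twice restores the list.
assert quickcheck(lambda xs: list(reversed(list(reversed(xs)))) == xs,
                  random_list) is None

# A property that fails: random lists are rarely sorted,
# so random generation quickly finds a counterexample.
counterexample = quickcheck(lambda xs: xs == sorted(xs), random_list)
```

The same loop underlies feedback-directed variants, which additionally use the outcome of earlier runs to steer the generator.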
Random Test Generation
Koen Claessen, John Hughes: QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs. ICFP, pp. 268-279, ACM, 2000.
Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, Thomas Ball: Feedback-Directed Random Test Generation. ICSE, pp. 75-84, IEEE 2007.
Search-based Test Generation
Chandrasekhar Boyapati, Sarfraz Khurshid, Darko Marinov: Korat: Automated Testing Based on Java Predicates. ISSTA, pp. 123-133, ACM, 2002.
Gordon Fraser, Andrea Arcuri: Evolutionary Generation of Whole Test Suites. QSIC, pp. 31-40, IEEE, 2011.
Test Generation with Symbolic Execution
Cristian Cadar, Daniel Dunbar, Dawson R. Engler: KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. OSDI, pp. 209-224, USENIX, 2008.
Nicky Williams, Bruno Marre, Patricia Mouy, Muriel Roger: PathCrawler: Automatic Generation of Path Tests by Combining Static and Dynamic Analysis. EDCC, pp. 281-292, Springer, 2005.
Nikolai Tillmann, Jonathan de Halleux: Pex – White Box Test Generation for .NET. TAP, pp. 134-153, Springer, 2008.
Koushik Sen, Darko Marinov, Gul Agha: CUTE: A Concolic Unit Testing Engine for C. FSE, pp. 263-272, ACM, 2005.
Patrice Godefroid, Michael Y. Levin, David A. Molnar: Automated Whitebox Fuzz Testing. NDSS, 2008.
Test Generation with Reachability Analyses
Dirk Beyer, Adam Chlipala, Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar: Generating Tests from Counterexamples. ICSE, pp. 326-335, IEEE, 2004.
Andreas Holzer, Christian Schallhart, Michael Tautschnig, Helmut Veith: FShell: Systematic Test Case Generation for Dynamic Analysis and Measurement. CAV, pp. 209-213, Springer, 2008.
Andreas Holzer, Christian Schallhart, Michael Tautschnig, Helmut Veith: Query-Driven Program Testing. VMCAI, pp. 151-166, Springer, 2009.
Dirk Beyer, Marie-Christine Jakobs: CoVeriTest: Cooperative Verifier-Based Testing. FASE, pp. 389-408, Springer, 2019.
Under the guidance of your supervisor, you will
- use the given literature and search for additional literature to become acquainted with your topic,
- prepare and give a presentation about your topic and afterwards discuss the topic with the other participants,
- write a scientific report, which provides a summary of your topic.
At the end of the seminar, students are able to autonomously familiarize themselves with a scientific topic and to present this topic to a heterogeneous audience, both orally and in written form.
More concretely, students can search for scientific literature and assess the relevance of the literature found. They are able to identify the main content of a scientific publication and to evaluate it critically. Moreover, they can compare different scientific approaches. Students can explain and defend their topic and results to a heterogeneous audience in an oral presentation. Additionally, they are able to describe their topic and results in a scientific paper.
Recommended. Knowledge of computer science and mathematics taught in the first four terms of the bachelor's degree in computer science
Helpful. Participation in a course of the research group Semantics and Verification of Parallel Systems
Seminar Topics in Past Terms
Topics in SS 2021
This seminar studies topics from our research area of validating verification results. The goal of validation is to increase the trust in the validity of the result. Validation allows one to check that the verification was actually done and that the verification tool worked properly. In this seminar, we look into (1) protocols for validation and (2) validation techniques for several analyses, such as dataflow analysis, abstract interpretation, and model checking.
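The idea common to these protocols is that checking a certificate is much simpler than redoing the verification. This can be illustrated with a toy finite-state example (a sketch with illustrative names, not one of the protocols below): the validator only checks that a claimed loop invariant is inductive and implies the property.

```python
def validate_invariant(states, init, step, guard, inv, safe):
    """Toy validator: instead of re-running the verifier, only check
    that a claimed loop invariant is inductive and implies safety."""
    # 1. Initiation: every initial state satisfies the invariant.
    if not all(inv(s) for s in states if init(s)):
        return False
    # 2. Consecution: one loop iteration preserves the invariant.
    if not all(inv(step(s)) for s in states if inv(s) and guard(s)):
        return False
    # 3. Safety: on loop exit, the invariant implies the property.
    return all(safe(s) for s in states if inv(s) and not guard(s))

# Program: x := 0; while x < 10: x := x + 2;  assert x == 10
certificate_ok = validate_invariant(
    states=range(0, 13),
    init=lambda x: x == 0,
    step=lambda x: x + 2,
    guard=lambda x: x < 10,
    inv=lambda x: x % 2 == 0 and 0 <= x <= 10,   # the certificate
    safe=lambda x: x == 10,
)
```

Producing the invariant is the hard part and is done by the untrusted verifier; the validator's checks above are cheap and local.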
- Proof-Carrying Code
George C. Necula: Enforcing Security and Safety with Proof-Carrying Code, MFPS 1999.
George C. Necula, Peter Lee: Safe, Untrusted Agents Using Proof-Carrying Code, Mobile Agents and Security 1998.
- Programs from Proofs
Daniel Wonisch, Alexander Schremmer, Heike Wehrheim: Programs from Proofs – A PCC Alternative, CAV 2013.
- Certifying Algorithms
Ross M. McConnell, Kurt Mehlhorn, Stefan Näher, Pascal Schweitzer: Certifying algorithms, Comput. Sci. Rev. 5(2), 2011.
- Amanat Protocol
Sagar Chaki, Christian Schallhart, Helmut Veith: Verification Across Intellectual Property Boundaries, CAV 2007.
- Violation and Correctness Witnesses
Dirk Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann, Andreas Stahlbauer: Witness validation and stepwise testification across software verifiers, FSE 2015.
Dirk Beyer, Matthias Dangl, Daniel Dietsch, Matthias Heizmann: Correctness witnesses: exchanging verification results between verifiers, FSE 2016.
Validating Model Checking Results
- Temporal Safety Proofs
Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, George C. Necula, Grégoire Sutre, Westley Weimer: Temporal-Safety Proofs for Systems Code, CAV 2002.
- Extreme Model Checking
Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, Marco A. A. Sanvido: Extreme Model Checking, Verification: Theory and Practice 2003.
Validating Dataflow Analysis and Abstract Interpretation Results
- Lightweight Bytecode Verification
Eva Rose: Lightweight Bytecode Verification, J. Autom. Reason. 31(3-4), 2003.
- Validating Dataflow Annotations
Wolfram Amme, Marc-André Möller, Philipp Adler: Data Flow Analysis as a General Concept for the Transport of Verifiable Program Annotations, Electron. Notes Theor. Comput. Sci. 176(3), 2007.
- Hoare Proofs from Abstract Interpretation
Sunae Seo, Hongseok Yang, Kwangkeun Yi: Automatic Construction of Hoare Proofs from Abstract Interpretation Results, APLAS 2003.
- Abstraction-Carrying Code
Elvira Albert, Germán Puebla, Manuel V. Hermenegildo: Abstraction-Carrying Code, LPAR 2004.
- Configurable Program Certification
Marie-Christine Jakobs, Heike Wehrheim: Certification for configurable program analysis, SPIN 2014.
Marie-Christine Jakobs: Speed Up Configurable Certificate Validation by Certificate Reduction and Partitioning, SEFM 2015.
Topics in WS 2020/21
This seminar studies topics from our research area of combining verification approaches; in particular, we look into cooperative verification. Cooperative verification approaches aim at leveraging the strengths of different verification approaches. We consider three types of cooperative verification approaches: (1) testification of alarms, (2) distribution of verification effort, and (3) algorithmic integration of different verification approaches.
The goal of alarm testification is to reduce false alarms and to focus on alarms that can be reproduced by a real test.
To distribute the verification effort among different verification approaches, different parts of the verification task (e.g., program paths) need to be inspected by distinct verification tools. Often, sequential combinations are used, in which the follow-up tool only inspects the remaining proof obligations.
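Such a sequential combination can be sketched as follows (toy verifiers with assumed, complementary strengths; the names are illustrative):

```python
def sequential_combination(obligations, verifiers):
    """Sequential tool combination: each follow-up verifier only inspects
    the proof obligations its predecessors left open. A verifier returns
    True (proved), False (refuted), or None (unknown)."""
    verdicts = {}
    remaining = list(obligations)
    for verify in verifiers:
        still_open = []
        for ob in remaining:
            verdict = verify(ob)
            if verdict is None:
                still_open.append(ob)   # pass on to the next tool
            else:
                verdicts[ob] = verdict
        remaining = still_open
        if not remaining:
            break
    return verdicts, remaining

# Hypothetical verifiers: the first handles even obligations,
# the second handles multiples of three.
v1 = lambda ob: True if ob % 2 == 0 else None
v2 = lambda ob: True if ob % 3 == 0 else None
verdicts, open_obligations = sequential_combination([2, 3, 4, 7], [v1, v2])
```

Obligation 7 stays open: neither tool can decide it, which is exactly the situation the approaches below address by exchanging richer information than a plain obligation list.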
The idea of algorithmic integration is to develop new verification algorithms that use existing verification approaches as components.
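A classic instance of such an integration is k-induction, which couples a bounded falsifier (the base case) with an inductive prover (the step case) in one algorithm. A minimal explicit-state sketch over a toy transition system (illustrative, not any of the tools below):

```python
def k_induction(init, step, safe, states, max_k=10):
    """Minimal explicit-state k-induction: the base case is a bounded
    check (falsifier), the step case an inductive proof attempt (prover);
    raising k integrates the two approaches."""
    for k in range(1, max_k + 1):
        # Base case: check every state reachable within k steps.
        frontier = set(init)
        for _ in range(k + 1):
            if any(not safe(s) for s in frontier):
                return "UNSAFE"
            frontier = {step(s) for s in frontier}
        # Step case: k consecutive safe states force a safe successor.
        def violates(s):
            path = [s]
            for _ in range(k - 1):
                path.append(step(path[-1]))
            return all(safe(t) for t in path) and not safe(step(path[-1]))
        if not any(violates(s) for s in states):
            return "SAFE"
    return "UNKNOWN"

# Toy system: x := 0; repeat x := (x + 2) mod 12.
step = lambda x: (x + 2) % 12
assert k_induction({0}, step, lambda x: x != 5, range(12)) == "SAFE"
assert k_induction({0}, step, lambda x: x != 8, range(12)) == "UNSAFE"
```

Note that the first property is not inductive for small k; only at k = 6 does the step case succeed, which is why auxiliary invariants (as in the CAV 2015 paper below) can drastically reduce the required k.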
Testification of Alarms
Christoph Csallner, Yannis Smaragdakis: Check 'n' crash: combining static checking and testing. ICSE 2005.
Christoph Csallner, Yannis Smaragdakis: DSD-Crasher: a hybrid analysis tool for bug finding. ISSTA 2006.
- SANTE: Static Analysis and Testing
Omar Chebaro, Nikolai Kosmatov, Alain Giorgetti, Jacques Julliand: Program slicing enhances a verification technique combining static and dynamic analysis. SAC 2012.
- Tests from Witnesses
Dirk Beyer, Matthias Dangl, Thomas Lemberger, Michael Tautschnig: Tests from Witnesses – Execution-Based Validation of Verification Results. TAP 2018.
Distribution of Verification Effort
- Failure-Directed Program Trimming
Kostas Ferles, Valentin Wüstholz, Maria Christakis, Isil Dillig: Failure-directed program trimming. ESEC/SIGSOFT FSE 2017
- Collaboration via Verification Annotations
Maria Christakis, Peter Müller, Valentin Wüstholz: Guiding dynamic symbolic execution toward unverified program executions. ICSE 2016.
- CoDiDroid: Cooperative Android Analysis
Felix Pauck, Heike Wehrheim: Together strong: cooperative Android app analysis. ESEC/SIGSOFT FSE 2019.
Algorithmic Integration of Different Verification Approaches
- Combining k-Induction with Invariant Generation
Dirk Beyer, Matthias Dangl, Philipp Wendler: Boosting k-Induction with Continuously-Refined Invariants. CAV 2015.
- Synergy: Using Test Generation for Counterexample Check and Identification of Refinement State
Bhargav S. Gulavani, Thomas A. Henzinger, Yamini Kannan, Aditya V. Nori, Sriram K. Rajamani: SYNERGY: a new algorithm for property checking. FSE 2006.
- Dash: Using Test Generation for Counterexample Check and Refinement
Ashutosh Gupta, Rupak Majumdar, Andrey Rybalchenko: From Tests to Proofs. TACAS 2009.
- Ufo: Integrating Under-Approximations and Over-Approximations
Aws Albarghouthi, Arie Gurfinkel, Marsha Chechik: From Under-Approximations to Over-Approximations and Back. TACAS 2012.
- kIkI: Combining k-Invariants with k-Induction
Martin Brain, Saurabh Joshi, Daniel Kroening, Peter Schrammel: Safety Verification and Refutation by k-Invariants and k-Induction. SAS 2015.
Topics in SS 2020
The seminar has been cancelled (reason: no participants). The seminar topics of this semester will therefore be discussed next semester.
Topics in WS 2019/20
This seminar studies topics from our research area of incremental verification. Techniques for incremental verification are a key enabler for applying formal verification in modern software projects. To deal with dozens of rapid changes, incremental verification techniques aim at a more efficient reverification of a modified program. For this purpose, incremental verification reuses information from a previous verification run. The kind of information and its use distinguish the different approaches. In this seminar, we look at four different classes of incremental verification: functional equivalence checking, reuse of intermediate results, proof adaption, and skipped verification.
In functional equivalence checking, one verifies that the modified program parts show the same input/output behavior as the corresponding parts before the modification.
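In its simplest form, the check compares two versions of a function on a set of inputs (a sketch with hypothetical functions; the papers below do this symbolically for all inputs):

```python
def regression_check(f_old, f_new, inputs):
    """Report inputs on which the modified function deviates from the
    input/output behavior of the original version."""
    return [x for x in inputs if f_old(x) != f_new(x)]

# Original and a refactored version intended to be equivalent.
double_v1 = lambda x: x * 2
double_v2 = lambda x: x << 1 if x >= 0 else x * 2
deviations = regression_check(double_v1, double_v2, range(-10, 11))
```

An empty deviation list over a finite input set is only evidence, not a proof; regression verification replaces the enumeration by a symbolic equivalence proof.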
Techniques reusing intermediate results avoid the re-exploration of known results, e.g., invariants, satisfiability results, or abstraction levels. Their assumption is that a large portion of the intermediate results is still valid and needed for the verification of the modified program.
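Constraint reuse, for example, can be sketched as a cache in front of the solver: a constraint is brought into a canonical form and its satisfiability result is stored, so a reverification run can skip solver calls. This is a simplified illustration with a toy enumeration-based "solver", not the Green framework itself.

```python
class ConstraintCache:
    """Canonicalize a constraint and cache its satisfiability result
    so later (re)verification runs can reuse it."""
    def __init__(self, solver):
        self.solver = solver
        self.cache = {}
        self.hits = 0

    def is_sat(self, constraint):
        key = tuple(sorted(constraint))   # canonical, order-independent form
        if key in self.cache:
            self.hits += 1                # reuse: no solver call needed
        else:
            self.cache[key] = self.solver(constraint)
        return self.cache[key]

def toy_solver(constraint):
    """Stand-in for a real SMT solver: a conjunction of bounds on one
    integer variable x, decided by enumeration over a small range."""
    return any(all(x <= b if op == "<=" else x >= b for op, b in constraint)
               for x in range(-100, 101))

cache = ConstraintCache(toy_solver)
assert cache.is_sat([("<=", 5), (">=", 0)]) is True
assert cache.is_sat([(">=", 0), ("<=", 5)]) is True   # reordered: cache hit
assert cache.is_sat([("<=", -1), (">=", 3)]) is False
```

Real implementations use stronger canonicalization (e.g., variable renaming and slicing into independent sub-constraints) to increase the hit rate across programs and runs.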
Approaches for proof adaption check which proof parts remain valid and perform a verification step for invalid proof parts.
The fourth class, skipped verification, is related to proof adaption. Techniques in this class stop verification as soon as a modification can be reached, or perform only a lightweight verification as long as no modification has been reached.
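The core of proof adaption can be sketched as follows (illustrative names; real approaches work on proof structures such as abstract reachability graphs, not flat maps):

```python
def adapt_proof(proof_parts, changed_parts, reverify):
    """Proof adaption sketch: reuse proof parts untouched by the program
    modification; re-verify only the invalidated parts.
    `proof_parts` maps a program part to its proven verdict."""
    adapted = {}
    redone = []
    for part, verdict in proof_parts.items():
        if part in changed_parts:
            adapted[part] = reverify(part)   # proof part invalidated
            redone.append(part)
        else:
            adapted[part] = verdict          # proof part still valid
    return adapted, redone

old_proof = {"init": True, "loop": True, "exit": True}
adapted, redone = adapt_proof(old_proof, changed_parts={"loop"},
                              reverify=lambda part: True)
# Only the modified part is re-verified; the rest of the proof is reused.
```

The savings grow with the fraction of the proof that the modification leaves intact, which is the common assumption behind this class of techniques.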
Functional Equivalence Checking
- Regression Verification
Benny Godlin, Ofer Strichman: Regression verification. DAC 2009
- Relational Regression Verification
Dennis Felsing, Sarah Grebing, Vladimir Klebanov, Philipp Rümmer, Mattias Ulbrich: Automating regression verification. ASE 2014
David A. Ramos, Dawson R. Engler: Practical, Low-Effort Equivalence Verification of Real Code. CAV 2011
Reusing Intermediate Results
- Constraints Reuse
Willem Visser, Jaco Geldenhuys, Matthew B. Dwyer: Green: reducing, reusing and recycling constraints in program analysis. FSE 2012
- Precision Reuse
Dirk Beyer, Stefan Löwe, Evgeny Novikov, Andreas Stahlbauer, Philipp Wendler: Precision reuse for efficient regression verification. ESEC/SIGSOFT FSE 2013
- Reusing Trace Abstractions
Bat-Chen Rothenberg, Daniel Dietsch, Matthias Heizmann: Incremental Verification Using Trace Abstraction. SAS 2018
- Incremental Upgrade Checking
Ondrej Sery, Grigory Fedyukovich, Natasha Sharygina: Incremental upgrade checking by means of interpolation-based function summaries. FMCAD 2012
- Extreme Model Checking
Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, Marco A. A. Sanvido: Extreme Model Checking. Verification: Theory and Practice 2003
- Directed Incremental Verification
Suzette Person, Guowei Yang, Neha Rungta, Sarfraz Khurshid: Directed incremental symbolic execution. PLDI 2011
- Memoized Symbolic Execution
Guowei Yang, Corina S. Pasareanu, Sarfraz Khurshid: Memoized symbolic execution. ISSTA 2012
- Regression Model Checking
Guowei Yang, Matthew B. Dwyer, Gregg Rothermel: Regression model checking. ICSM 2009
Ahmet Çelik, Karl Palmskog, Milos Gligoric: iCoq: regression proof selection for large-scale verification projects. ASE 2017
Wenxi Wang, Kaiyuan Wang, Milos Gligoric, Sarfraz Khurshid: Incremental Analysis of Evolving Alloy Models. TACAS 2019